00:00:00.001 Started by upstream project "autotest-per-patch" build number 132425
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.120 The recommended git tool is: git
00:00:00.120 using credential 00000000-0000-0000-0000-000000000002
00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.180 Fetching changes from the remote Git repository
00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.237 Using shallow fetch with depth 1
00:00:00.237 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.237 > git --version # timeout=10
00:00:00.276 > git --version # 'git version 2.39.2'
00:00:00.276 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.301 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.301 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.321 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.334 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.347 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.347 > git config core.sparsecheckout # timeout=10
00:00:07.360 > git read-tree -mu HEAD # timeout=10
00:00:07.378 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.399 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.399 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.486 [Pipeline] Start of Pipeline
00:00:07.500 [Pipeline] library
00:00:07.501 Loading library shm_lib@master
00:00:07.501 Library shm_lib@master is cached. Copying from home.
00:00:07.514 [Pipeline] node
00:00:07.523 Running on CYP12 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:07.524 [Pipeline] {
00:00:07.534 [Pipeline] catchError
00:00:07.535 [Pipeline] {
00:00:07.546 [Pipeline] wrap
00:00:07.552 [Pipeline] {
00:00:07.561 [Pipeline] stage
00:00:07.562 [Pipeline] { (Prologue)
00:00:07.772 [Pipeline] sh
00:00:08.059 + logger -p user.info -t JENKINS-CI
00:00:08.079 [Pipeline] echo
00:00:08.081 Node: CYP12
00:00:08.088 [Pipeline] sh
00:00:08.396 [Pipeline] setCustomBuildProperty
00:00:08.410 [Pipeline] echo
00:00:08.411 Cleanup processes
00:00:08.416 [Pipeline] sh
00:00:08.702 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.702 3233518 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.717 [Pipeline] sh
00:00:09.007 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:09.007 ++ grep -v 'sudo pgrep'
00:00:09.007 ++ awk '{print $1}'
00:00:09.007 + sudo kill -9
00:00:09.007 + true
00:00:09.031 [Pipeline] cleanWs
00:00:09.044 [WS-CLEANUP] Deleting project workspace...
00:00:09.044 [WS-CLEANUP] Deferred wipeout is used...
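Note: the "Cleanup processes" step traced above kills any SPDK processes left over from a previous run on this node. A minimal sketch of that idiom in shell, assuming the same workspace path this job uses; grep -v drops the pgrep invocation itself, and the trailing true mirrors the "+ true" in the trace, which tolerates an empty PID list:

    # Sketch only; workspace path taken from this job's trace
    WS=/var/jenkins/workspace/nvmf-phy-autotest
    # List matching processes, drop the pgrep line itself, keep only the PIDs
    PIDS=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Kill any leftovers; tolerate an empty PID list, as the trace does
    sudo kill -9 $PIDS || true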
00:00:09.051 [WS-CLEANUP] done
00:00:09.056 [Pipeline] setCustomBuildProperty
00:00:09.073 [Pipeline] sh
00:00:09.366 + sudo git config --global --replace-all safe.directory '*'
00:00:09.464 [Pipeline] httpRequest
00:00:11.976 [Pipeline] echo
00:00:11.978 Sorcerer 10.211.164.101 is alive
00:00:11.989 [Pipeline] retry
00:00:11.991 [Pipeline] {
00:00:12.005 [Pipeline] httpRequest
00:00:12.010 HttpMethod: GET
00:00:12.011 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.011 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.029 Response Code: HTTP/1.1 200 OK
00:00:12.029 Success: Status code 200 is in the accepted range: 200,404
00:00:12.029 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:18.246 [Pipeline] }
00:00:18.265 [Pipeline] // retry
00:00:18.273 [Pipeline] sh
00:00:18.562 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:18.580 [Pipeline] httpRequest
00:00:18.956 [Pipeline] echo
00:00:18.958 Sorcerer 10.211.164.101 is alive
00:00:18.966 [Pipeline] retry
00:00:18.968 [Pipeline] {
00:00:18.981 [Pipeline] httpRequest
00:00:18.985 HttpMethod: GET
00:00:18.985 URL: http://10.211.164.101/packages/spdk_2e015e34ff96df01d0b187f7b8e7e8a9485b9b60.tar.gz
00:00:18.986 Sending request to url: http://10.211.164.101/packages/spdk_2e015e34ff96df01d0b187f7b8e7e8a9485b9b60.tar.gz
00:00:18.998 Response Code: HTTP/1.1 200 OK
00:00:18.999 Success: Status code 200 is in the accepted range: 200,404
00:00:18.999 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_2e015e34ff96df01d0b187f7b8e7e8a9485b9b60.tar.gz
00:01:14.675 [Pipeline] }
00:01:14.692 [Pipeline] // retry
00:01:14.700 [Pipeline] sh
00:01:14.989 + tar --no-same-owner -xf spdk_2e015e34ff96df01d0b187f7b8e7e8a9485b9b60.tar.gz
00:01:18.305 [Pipeline] sh
00:01:18.594 + git -C spdk log --oneline -n5
00:01:18.594 2e015e34f bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak
00:01:18.594 aae11995f bdev/malloc: Fix unexpected DIF verification error for initial read
00:01:18.594 7bc1aace1 dif: Set DIF field to 0 explicitly if its check is disabled
00:01:18.594 ce2cd8dc9 bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata
00:01:18.594 2d31d77ac ut/bdev: Remove duplication with many stups among unit test files
00:01:18.607 [Pipeline] }
00:01:18.621 [Pipeline] // stage
00:01:18.629 [Pipeline] stage
00:01:18.631 [Pipeline] { (Prepare)
00:01:18.646 [Pipeline] writeFile
00:01:18.660 [Pipeline] sh
00:01:18.949 + logger -p user.info -t JENKINS-CI
00:01:18.963 [Pipeline] sh
00:01:19.249 + logger -p user.info -t JENKINS-CI
00:01:19.264 [Pipeline] sh
00:01:19.553 + cat autorun-spdk.conf
00:01:19.553 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.553 SPDK_TEST_NVMF=1
00:01:19.553 SPDK_TEST_NVME_CLI=1
00:01:19.553 SPDK_TEST_NVMF_NICS=mlx5
00:01:19.553 SPDK_RUN_UBSAN=1
00:01:19.553 NET_TYPE=phy
00:01:19.562 RUN_NIGHTLY=0
00:01:19.566 [Pipeline] readFile
00:01:19.591 [Pipeline] withEnv
00:01:19.594 [Pipeline] {
00:01:19.605 [Pipeline] sh
00:01:19.893 + set -ex
00:01:19.893 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:19.893 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:19.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.893 ++ SPDK_TEST_NVMF=1
00:01:19.893 ++ SPDK_TEST_NVME_CLI=1
00:01:19.893 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:19.893 ++ 
SPDK_RUN_UBSAN=1 00:01:19.893 ++ NET_TYPE=phy 00:01:19.893 ++ RUN_NIGHTLY=0 00:01:19.893 + case $SPDK_TEST_NVMF_NICS in 00:01:19.893 + DRIVERS=mlx5_ib 00:01:19.893 + [[ -n mlx5_ib ]] 00:01:19.893 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:19.893 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.893 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:19.893 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.893 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.893 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.893 + true 00:01:19.893 + for D in $DRIVERS 00:01:19.893 + sudo modprobe mlx5_ib 00:01:20.155 + exit 0 00:01:20.166 [Pipeline] } 00:01:20.183 [Pipeline] // withEnv 00:01:20.188 [Pipeline] } 00:01:20.203 [Pipeline] // stage 00:01:20.213 [Pipeline] catchError 00:01:20.215 [Pipeline] { 00:01:20.228 [Pipeline] timeout 00:01:20.229 Timeout set to expire in 1 hr 0 min 00:01:20.230 [Pipeline] { 00:01:20.244 [Pipeline] stage 00:01:20.246 [Pipeline] { (Tests) 00:01:20.260 [Pipeline] sh 00:01:20.549 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:20.549 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:20.549 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:20.549 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:20.549 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:20.549 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:20.549 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:20.549 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:20.549 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:20.549 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:20.549 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:20.549 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:20.549 + source /etc/os-release 00:01:20.549 ++ NAME='Fedora Linux' 00:01:20.549 ++ VERSION='39 (Cloud Edition)' 00:01:20.549 ++ ID=fedora 00:01:20.549 ++ VERSION_ID=39 00:01:20.549 ++ VERSION_CODENAME= 00:01:20.549 ++ PLATFORM_ID=platform:f39 00:01:20.549 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:20.549 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.549 ++ LOGO=fedora-logo-icon 00:01:20.549 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:20.549 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.549 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:20.549 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.549 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.549 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.549 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:20.549 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.549 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:20.549 ++ SUPPORT_END=2024-11-12 00:01:20.549 ++ VARIANT='Cloud Edition' 00:01:20.549 ++ VARIANT_ID=cloud 00:01:20.549 + uname -a 00:01:20.549 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:20.549 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:23.854 Hugepages 00:01:23.854 node hugesize free / total 00:01:23.854 node0 1048576kB 0 / 0 00:01:23.854 node0 2048kB 0 / 0 00:01:23.854 node1 1048576kB 0 / 0 00:01:23.854 node1 2048kB 0 / 0 00:01:23.854 00:01:23.854 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.854 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:23.854 I/OAT 0000:00:01.1 
8086 0b00 0 ioatdma - - 00:01:23.854 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:23.854 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:23.854 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:23.854 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:23.854 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:23.854 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:23.854 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:23.854 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:23.854 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:23.854 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:23.854 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:23.854 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:23.854 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:23.854 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:23.854 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:23.854 + rm -f /tmp/spdk-ld-path 00:01:23.854 + source autorun-spdk.conf 00:01:23.854 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.854 ++ SPDK_TEST_NVMF=1 00:01:23.854 ++ SPDK_TEST_NVME_CLI=1 00:01:23.854 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:23.854 ++ SPDK_RUN_UBSAN=1 00:01:23.854 ++ NET_TYPE=phy 00:01:23.854 ++ RUN_NIGHTLY=0 00:01:23.854 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.854 + [[ -n '' ]] 00:01:23.854 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:23.854 + for M in /var/spdk/build-*-manifest.txt 00:01:23.854 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:23.854 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:23.854 + for M in /var/spdk/build-*-manifest.txt 00:01:23.854 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.854 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:23.854 + for M in /var/spdk/build-*-manifest.txt 00:01:23.854 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.854 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:23.854 ++ uname 00:01:23.854 + [[ Linux == \L\i\n\u\x ]] 00:01:23.854 + sudo dmesg -T 00:01:23.854 + sudo dmesg --clear 00:01:23.854 + dmesg_pid=3234643 00:01:23.854 + [[ Fedora Linux == FreeBSD ]] 00:01:23.854 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.854 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.854 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.854 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.854 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.855 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.855 + sudo dmesg -Tw 00:01:23.855 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.855 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:23.855 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.855 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.855 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.855 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.855 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.855 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.855 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:23.855 17:46:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:23.855 17:46:42 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:23.855 17:46:42 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.855 17:46:42 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:23.855 17:46:42 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:23.855 17:46:42 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:23.855 17:46:42 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:23.855 17:46:42 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:01:23.855 17:46:42 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0 00:01:23.855 17:46:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:23.855 17:46:42 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:24.117 17:46:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.117 17:46:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:24.117 17:46:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.117 17:46:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.117 17:46:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.117 17:46:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.117 17:46:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.117 17:46:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.117 17:46:42 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.117 17:46:42 -- paths/export.sh@5 -- $ export PATH 00:01:24.117 17:46:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.117 17:46:42 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:24.117 17:46:42 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:24.117 17:46:42 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732121202.XXXXXX 00:01:24.117 17:46:42 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732121202.zvZ0yl 00:01:24.117 17:46:42 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:24.117 17:46:42 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:24.117 17:46:42 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:24.117 17:46:42 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:24.117 17:46:42 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.117 17:46:42 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:24.117 17:46:42 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:24.117 17:46:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.117 17:46:42 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:24.117 17:46:42 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:24.117 17:46:42 -- pm/common@17 -- $ local monitor 00:01:24.117 17:46:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.117 17:46:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.117 17:46:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.117 17:46:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.117 17:46:42 -- pm/common@25 -- $ sleep 1 00:01:24.117 17:46:42 -- pm/common@21 -- $ date +%s 00:01:24.117 17:46:42 -- pm/common@21 -- $ date +%s 00:01:24.117 17:46:42 -- pm/common@21 -- $ date +%s 00:01:24.117 17:46:42 -- pm/common@21 -- $ date +%s 00:01:24.117 17:46:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732121202 00:01:24.117 17:46:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732121202 00:01:24.117 17:46:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732121202 00:01:24.117 17:46:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732121202 00:01:24.117 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732121202_collect-vmstat.pm.log 00:01:24.117 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732121202_collect-cpu-load.pm.log 00:01:24.117 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732121202_collect-cpu-temp.pm.log 00:01:24.117 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732121202_collect-bmc-pm.bmc.pm.log 00:01:25.105 17:46:43 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:25.105 17:46:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.105 17:46:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.105 17:46:43 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:25.105 17:46:43 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.105 Wed Nov 20 04:46:43 PM UTC 2024 00:01:25.105 17:46:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.105 v25.01-pre-235-g2e015e34f 00:01:25.105 17:46:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:25.105 17:46:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.105 17:46:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.105 17:46:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:25.105 17:46:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.105 17:46:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.105 ************************************ 00:01:25.105 START TEST ubsan 00:01:25.105 ************************************ 00:01:25.105 17:46:43 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:25.105 using ubsan 00:01:25.105 00:01:25.105 real 0m0.000s 00:01:25.105 user 0m0.000s 00:01:25.105 sys 0m0.000s 00:01:25.105 17:46:43 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:25.105 17:46:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.105 ************************************ 00:01:25.105 END TEST ubsan 00:01:25.105 ************************************ 00:01:25.105 17:46:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.105 17:46:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:25.105 17:46:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.105 17:46:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:25.105 17:46:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:25.105 17:46:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:25.105 17:46:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:25.105 17:46:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:25.105 17:46:43 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:25.105 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:25.105 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:25.678 Using 'verbs' RDMA provider 00:01:41.532 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:53.763 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:53.763 Creating mk/config.mk...done. 00:01:53.763 Creating mk/cc.flags.mk...done. 00:01:53.763 Type 'make' to build. 00:01:53.763 17:47:12 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:53.763 17:47:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:53.763 17:47:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:53.763 17:47:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.763 ************************************ 00:01:53.763 START TEST make 00:01:53.763 ************************************ 00:01:53.763 17:47:12 make -- common/autotest_common.sh@1129 -- $ make -j144 00:01:54.025 make[1]: Nothing to be done for 'all'. 00:02:02.169 The Meson build system 00:02:02.169 Version: 1.5.0 00:02:02.169 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:02.169 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:02.169 Build type: native build 00:02:02.169 Program cat found: YES (/usr/bin/cat) 00:02:02.169 Project name: DPDK 00:02:02.169 Project version: 24.03.0 00:02:02.169 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:02.169 C linker for the host machine: cc ld.bfd 2.40-14 00:02:02.169 Host machine cpu family: x86_64 00:02:02.169 Host machine cpu: x86_64 00:02:02.169 Message: ## Building in Developer Mode ## 00:02:02.169 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.169 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:02.169 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.169 Program python3 found: YES (/usr/bin/python3) 00:02:02.169 Program cat found: YES (/usr/bin/cat) 00:02:02.169 Compiler for C supports arguments -march=native: YES 00:02:02.169 Checking for size of "void *" : 8 00:02:02.169 Checking for size of "void *" : 8 (cached) 00:02:02.169 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:02.169 Library m found: YES 00:02:02.169 Library numa found: YES 00:02:02.169 Has header "numaif.h" : YES 00:02:02.169 Library fdt found: NO 00:02:02.169 Library execinfo found: NO 00:02:02.169 Has header "execinfo.h" : YES 00:02:02.169 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:02.169 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:02.169 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.169 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.169 Run-time dependency openssl found: YES 3.1.1 00:02:02.169 Run-time dependency libpcap found: YES 1.10.4 00:02:02.169 Has header "pcap.h" with dependency libpcap: YES 00:02:02.169 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.169 Compiler for C 
supports arguments -Wdeprecated: YES 00:02:02.169 Compiler for C supports arguments -Wformat: YES 00:02:02.169 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:02.169 Compiler for C supports arguments -Wformat-security: NO 00:02:02.169 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.169 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:02.169 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.169 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.169 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.169 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.169 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.169 Compiler for C supports arguments -Wundef: YES 00:02:02.169 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.169 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.169 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:02.169 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.169 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:02.169 Program objdump found: YES (/usr/bin/objdump) 00:02:02.169 Compiler for C supports arguments -mavx512f: YES 00:02:02.169 Checking if "AVX512 checking" compiles: YES 00:02:02.169 Fetching value of define "__SSE4_2__" : 1 00:02:02.169 Fetching value of define "__AES__" : 1 00:02:02.169 Fetching value of define "__AVX__" : 1 00:02:02.169 Fetching value of define "__AVX2__" : 1 00:02:02.169 Fetching value of define "__AVX512BW__" : 1 00:02:02.169 Fetching value of define "__AVX512CD__" : 1 00:02:02.169 Fetching value of define "__AVX512DQ__" : 1 00:02:02.169 Fetching value of define "__AVX512F__" : 1 00:02:02.169 Fetching value of define "__AVX512VL__" : 1 00:02:02.169 Fetching value of define "__PCLMUL__" : 1 00:02:02.169 Fetching value of define "__RDRND__" : 1 00:02:02.169 Fetching value of define "__RDSEED__" : 1 00:02:02.169 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:02.169 Fetching value of define "__znver1__" : (undefined) 00:02:02.169 Fetching value of define "__znver2__" : (undefined) 00:02:02.169 Fetching value of define "__znver3__" : (undefined) 00:02:02.169 Fetching value of define "__znver4__" : (undefined) 00:02:02.169 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.169 Message: lib/log: Defining dependency "log" 00:02:02.169 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.169 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.169 Checking for function "getentropy" : NO 00:02:02.169 Message: lib/eal: Defining dependency "eal" 00:02:02.169 Message: lib/ring: Defining dependency "ring" 00:02:02.169 Message: lib/rcu: Defining dependency "rcu" 00:02:02.169 Message: lib/mempool: Defining dependency "mempool" 00:02:02.169 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.169 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.169 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.169 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:02.169 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:02.169 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:02.169 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:02.169 Compiler for C supports arguments -mpclmul: YES 00:02:02.169 Compiler for C supports arguments -maes: YES 00:02:02.169 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.169 
Compiler for C supports arguments -mavx512bw: YES 00:02:02.169 Compiler for C supports arguments -mavx512dq: YES 00:02:02.169 Compiler for C supports arguments -mavx512vl: YES 00:02:02.169 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.169 Compiler for C supports arguments -mavx2: YES 00:02:02.169 Compiler for C supports arguments -mavx: YES 00:02:02.169 Message: lib/net: Defining dependency "net" 00:02:02.169 Message: lib/meter: Defining dependency "meter" 00:02:02.169 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.169 Message: lib/pci: Defining dependency "pci" 00:02:02.169 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.169 Message: lib/hash: Defining dependency "hash" 00:02:02.169 Message: lib/timer: Defining dependency "timer" 00:02:02.169 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.169 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.169 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.169 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.169 Message: lib/power: Defining dependency "power" 00:02:02.169 Message: lib/reorder: Defining dependency "reorder" 00:02:02.169 Message: lib/security: Defining dependency "security" 00:02:02.169 Has header "linux/userfaultfd.h" : YES 00:02:02.169 Has header "linux/vduse.h" : YES 00:02:02.169 Message: lib/vhost: Defining dependency "vhost" 00:02:02.169 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:02.169 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.169 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.169 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:02.169 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.169 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.169 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.169 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.169 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.169 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:02.169 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:02.169 Configuring doxy-api-html.conf using configuration 00:02:02.169 Configuring doxy-api-man.conf using configuration 00:02:02.169 Program mandb found: YES (/usr/bin/mandb) 00:02:02.169 Program sphinx-build found: NO 00:02:02.169 Configuring rte_build_config.h using configuration 00:02:02.169 Message: 00:02:02.169 ================= 00:02:02.169 Applications Enabled 00:02:02.169 ================= 00:02:02.169 00:02:02.169 apps: 00:02:02.169 00:02:02.169 00:02:02.169 Message: 00:02:02.169 ================= 00:02:02.169 Libraries Enabled 00:02:02.169 ================= 00:02:02.169 00:02:02.169 libs: 00:02:02.169 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.169 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.169 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.169 00:02:02.169 Message: 00:02:02.169 =============== 00:02:02.169 Drivers Enabled 00:02:02.169 =============== 00:02:02.169 00:02:02.169 common: 00:02:02.169 00:02:02.169 bus: 00:02:02.169 pci, vdev, 00:02:02.169 mempool: 00:02:02.169 ring, 00:02:02.169 dma: 00:02:02.169 00:02:02.169 net: 00:02:02.169 00:02:02.169 crypto: 00:02:02.169 00:02:02.169 compress: 00:02:02.169 00:02:02.169 vdpa: 00:02:02.169 00:02:02.169 00:02:02.169 
Message: 00:02:02.169 ================= 00:02:02.169 Content Skipped 00:02:02.169 ================= 00:02:02.169 00:02:02.169 apps: 00:02:02.169 dumpcap: explicitly disabled via build config 00:02:02.169 graph: explicitly disabled via build config 00:02:02.169 pdump: explicitly disabled via build config 00:02:02.169 proc-info: explicitly disabled via build config 00:02:02.169 test-acl: explicitly disabled via build config 00:02:02.170 test-bbdev: explicitly disabled via build config 00:02:02.170 test-cmdline: explicitly disabled via build config 00:02:02.170 test-compress-perf: explicitly disabled via build config 00:02:02.170 test-crypto-perf: explicitly disabled via build config 00:02:02.170 test-dma-perf: explicitly disabled via build config 00:02:02.170 test-eventdev: explicitly disabled via build config 00:02:02.170 test-fib: explicitly disabled via build config 00:02:02.170 test-flow-perf: explicitly disabled via build config 00:02:02.170 test-gpudev: explicitly disabled via build config 00:02:02.170 test-mldev: explicitly disabled via build config 00:02:02.170 test-pipeline: explicitly disabled via build config 00:02:02.170 test-pmd: explicitly disabled via build config 00:02:02.170 test-regex: explicitly disabled via build config 00:02:02.170 test-sad: explicitly disabled via build config 00:02:02.170 test-security-perf: explicitly disabled via build config 00:02:02.170 00:02:02.170 libs: 00:02:02.170 argparse: explicitly disabled via build config 00:02:02.170 metrics: explicitly disabled via build config 00:02:02.170 acl: explicitly disabled via build config 00:02:02.170 bbdev: explicitly disabled via build config 00:02:02.170 bitratestats: explicitly disabled via build config 00:02:02.170 bpf: explicitly disabled via build config 00:02:02.170 cfgfile: explicitly disabled via build config 00:02:02.170 distributor: explicitly disabled via build config 00:02:02.170 efd: explicitly disabled via build config 00:02:02.170 eventdev: explicitly disabled via build config 00:02:02.170 dispatcher: explicitly disabled via build config 00:02:02.170 gpudev: explicitly disabled via build config 00:02:02.170 gro: explicitly disabled via build config 00:02:02.170 gso: explicitly disabled via build config 00:02:02.170 ip_frag: explicitly disabled via build config 00:02:02.170 jobstats: explicitly disabled via build config 00:02:02.170 latencystats: explicitly disabled via build config 00:02:02.170 lpm: explicitly disabled via build config 00:02:02.170 member: explicitly disabled via build config 00:02:02.170 pcapng: explicitly disabled via build config 00:02:02.170 rawdev: explicitly disabled via build config 00:02:02.170 regexdev: explicitly disabled via build config 00:02:02.170 mldev: explicitly disabled via build config 00:02:02.170 rib: explicitly disabled via build config 00:02:02.170 sched: explicitly disabled via build config 00:02:02.170 stack: explicitly disabled via build config 00:02:02.170 ipsec: explicitly disabled via build config 00:02:02.170 pdcp: explicitly disabled via build config 00:02:02.170 fib: explicitly disabled via build config 00:02:02.170 port: explicitly disabled via build config 00:02:02.170 pdump: explicitly disabled via build config 00:02:02.170 table: explicitly disabled via build config 00:02:02.170 pipeline: explicitly disabled via build config 00:02:02.170 graph: explicitly disabled via build config 00:02:02.170 node: explicitly disabled via build config 00:02:02.170 00:02:02.170 drivers: 00:02:02.170 common/cpt: not in enabled drivers build config 00:02:02.170 
common/dpaax: not in enabled drivers build config 00:02:02.170 common/iavf: not in enabled drivers build config 00:02:02.170 common/idpf: not in enabled drivers build config 00:02:02.170 common/ionic: not in enabled drivers build config 00:02:02.170 common/mvep: not in enabled drivers build config 00:02:02.170 common/octeontx: not in enabled drivers build config 00:02:02.170 bus/auxiliary: not in enabled drivers build config 00:02:02.170 bus/cdx: not in enabled drivers build config 00:02:02.170 bus/dpaa: not in enabled drivers build config 00:02:02.170 bus/fslmc: not in enabled drivers build config 00:02:02.170 bus/ifpga: not in enabled drivers build config 00:02:02.170 bus/platform: not in enabled drivers build config 00:02:02.170 bus/uacce: not in enabled drivers build config 00:02:02.170 bus/vmbus: not in enabled drivers build config 00:02:02.170 common/cnxk: not in enabled drivers build config 00:02:02.170 common/mlx5: not in enabled drivers build config 00:02:02.170 common/nfp: not in enabled drivers build config 00:02:02.170 common/nitrox: not in enabled drivers build config 00:02:02.170 common/qat: not in enabled drivers build config 00:02:02.170 common/sfc_efx: not in enabled drivers build config 00:02:02.170 mempool/bucket: not in enabled drivers build config 00:02:02.170 mempool/cnxk: not in enabled drivers build config 00:02:02.170 mempool/dpaa: not in enabled drivers build config 00:02:02.170 mempool/dpaa2: not in enabled drivers build config 00:02:02.170 mempool/octeontx: not in enabled drivers build config 00:02:02.170 mempool/stack: not in enabled drivers build config 00:02:02.170 dma/cnxk: not in enabled drivers build config 00:02:02.170 dma/dpaa: not in enabled drivers build config 00:02:02.170 dma/dpaa2: not in enabled drivers build config 00:02:02.170 dma/hisilicon: not in enabled drivers build config 00:02:02.170 dma/idxd: not in enabled drivers build config 00:02:02.170 dma/ioat: not in enabled drivers build config 00:02:02.170 dma/skeleton: not in enabled drivers build config 00:02:02.170 net/af_packet: not in enabled drivers build config 00:02:02.170 net/af_xdp: not in enabled drivers build config 00:02:02.170 net/ark: not in enabled drivers build config 00:02:02.170 net/atlantic: not in enabled drivers build config 00:02:02.170 net/avp: not in enabled drivers build config 00:02:02.170 net/axgbe: not in enabled drivers build config 00:02:02.170 net/bnx2x: not in enabled drivers build config 00:02:02.170 net/bnxt: not in enabled drivers build config 00:02:02.170 net/bonding: not in enabled drivers build config 00:02:02.170 net/cnxk: not in enabled drivers build config 00:02:02.170 net/cpfl: not in enabled drivers build config 00:02:02.170 net/cxgbe: not in enabled drivers build config 00:02:02.170 net/dpaa: not in enabled drivers build config 00:02:02.170 net/dpaa2: not in enabled drivers build config 00:02:02.170 net/e1000: not in enabled drivers build config 00:02:02.170 net/ena: not in enabled drivers build config 00:02:02.170 net/enetc: not in enabled drivers build config 00:02:02.170 net/enetfec: not in enabled drivers build config 00:02:02.170 net/enic: not in enabled drivers build config 00:02:02.170 net/failsafe: not in enabled drivers build config 00:02:02.170 net/fm10k: not in enabled drivers build config 00:02:02.170 net/gve: not in enabled drivers build config 00:02:02.170 net/hinic: not in enabled drivers build config 00:02:02.170 net/hns3: not in enabled drivers build config 00:02:02.170 net/i40e: not in enabled drivers build config 00:02:02.170 
net/iavf: not in enabled drivers build config 00:02:02.170 net/ice: not in enabled drivers build config 00:02:02.170 net/idpf: not in enabled drivers build config 00:02:02.170 net/igc: not in enabled drivers build config 00:02:02.170 net/ionic: not in enabled drivers build config 00:02:02.170 net/ipn3ke: not in enabled drivers build config 00:02:02.170 net/ixgbe: not in enabled drivers build config 00:02:02.170 net/mana: not in enabled drivers build config 00:02:02.170 net/memif: not in enabled drivers build config 00:02:02.170 net/mlx4: not in enabled drivers build config 00:02:02.170 net/mlx5: not in enabled drivers build config 00:02:02.170 net/mvneta: not in enabled drivers build config 00:02:02.170 net/mvpp2: not in enabled drivers build config 00:02:02.170 net/netvsc: not in enabled drivers build config 00:02:02.170 net/nfb: not in enabled drivers build config 00:02:02.170 net/nfp: not in enabled drivers build config 00:02:02.170 net/ngbe: not in enabled drivers build config 00:02:02.170 net/null: not in enabled drivers build config 00:02:02.170 net/octeontx: not in enabled drivers build config 00:02:02.170 net/octeon_ep: not in enabled drivers build config 00:02:02.170 net/pcap: not in enabled drivers build config 00:02:02.170 net/pfe: not in enabled drivers build config 00:02:02.170 net/qede: not in enabled drivers build config 00:02:02.170 net/ring: not in enabled drivers build config 00:02:02.170 net/sfc: not in enabled drivers build config 00:02:02.170 net/softnic: not in enabled drivers build config 00:02:02.170 net/tap: not in enabled drivers build config 00:02:02.170 net/thunderx: not in enabled drivers build config 00:02:02.170 net/txgbe: not in enabled drivers build config 00:02:02.170 net/vdev_netvsc: not in enabled drivers build config 00:02:02.170 net/vhost: not in enabled drivers build config 00:02:02.170 net/virtio: not in enabled drivers build config 00:02:02.170 net/vmxnet3: not in enabled drivers build config 00:02:02.170 raw/*: missing internal dependency, "rawdev" 00:02:02.170 crypto/armv8: not in enabled drivers build config 00:02:02.170 crypto/bcmfs: not in enabled drivers build config 00:02:02.170 crypto/caam_jr: not in enabled drivers build config 00:02:02.170 crypto/ccp: not in enabled drivers build config 00:02:02.170 crypto/cnxk: not in enabled drivers build config 00:02:02.170 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.170 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.170 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.170 crypto/mlx5: not in enabled drivers build config 00:02:02.170 crypto/mvsam: not in enabled drivers build config 00:02:02.170 crypto/nitrox: not in enabled drivers build config 00:02:02.170 crypto/null: not in enabled drivers build config 00:02:02.170 crypto/octeontx: not in enabled drivers build config 00:02:02.170 crypto/openssl: not in enabled drivers build config 00:02:02.170 crypto/scheduler: not in enabled drivers build config 00:02:02.170 crypto/uadk: not in enabled drivers build config 00:02:02.170 crypto/virtio: not in enabled drivers build config 00:02:02.170 compress/isal: not in enabled drivers build config 00:02:02.170 compress/mlx5: not in enabled drivers build config 00:02:02.170 compress/nitrox: not in enabled drivers build config 00:02:02.170 compress/octeontx: not in enabled drivers build config 00:02:02.170 compress/zlib: not in enabled drivers build config 00:02:02.170 regex/*: missing internal dependency, "regexdev" 00:02:02.170 ml/*: missing internal dependency, 
"mldev" 00:02:02.170 vdpa/ifc: not in enabled drivers build config 00:02:02.170 vdpa/mlx5: not in enabled drivers build config 00:02:02.170 vdpa/nfp: not in enabled drivers build config 00:02:02.170 vdpa/sfc: not in enabled drivers build config 00:02:02.170 event/*: missing internal dependency, "eventdev" 00:02:02.170 baseband/*: missing internal dependency, "bbdev" 00:02:02.170 gpu/*: missing internal dependency, "gpudev" 00:02:02.170 00:02:02.170 00:02:02.432 Build targets in project: 84 00:02:02.432 00:02:02.432 DPDK 24.03.0 00:02:02.432 00:02:02.432 User defined options 00:02:02.432 buildtype : debug 00:02:02.432 default_library : shared 00:02:02.432 libdir : lib 00:02:02.432 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:02.432 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:02.432 c_link_args : 00:02:02.432 cpu_instruction_set: native 00:02:02.432 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:02.432 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:02.432 enable_docs : false 00:02:02.432 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:02.432 enable_kmods : false 00:02:02.432 max_lcores : 128 00:02:02.432 tests : false 00:02:02.432 00:02:02.432 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.013 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:03.013 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.013 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.013 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.013 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.013 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.013 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.013 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.013 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.013 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.013 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.013 [11/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:03.013 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.013 [13/267] Linking static target lib/librte_kvargs.a 00:02:03.013 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.013 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.013 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.013 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.273 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.273 [19/267] Linking static target 
lib/librte_log.a 00:02:03.273 [20/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.273 [21/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.273 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:03.273 [23/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.273 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.273 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.273 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:03.273 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:03.273 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.273 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.273 [30/267] Linking static target lib/librte_pci.a 00:02:03.273 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:03.273 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:03.273 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.273 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.273 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.273 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.273 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.273 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.532 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.532 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.532 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.532 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.532 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.532 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.532 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.532 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.532 [47/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.532 [48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.532 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.532 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.532 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.532 [52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.532 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.532 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:03.532 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.532 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.532 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.532 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.532 [59/267] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.532 [60/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.532 [61/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.532 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.532 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:03.532 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.532 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.532 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.532 [67/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.532 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.532 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.532 [70/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:03.532 [71/267] Linking static target lib/librte_meter.a 00:02:03.532 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.532 [73/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.532 [74/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.532 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.532 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.532 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.532 [78/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.532 [79/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:03.532 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.532 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.532 [82/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:03.532 [83/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.532 [84/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.532 [85/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.532 [86/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.532 [87/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.532 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.532 [89/267] Linking static target lib/librte_telemetry.a 00:02:03.532 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.532 [91/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:03.532 [92/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:03.532 [93/267] Linking static target lib/librte_timer.a 00:02:03.532 [94/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.532 [95/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.532 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.532 [97/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:03.532 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.532 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.532 
[100/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.793 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.793 [102/267] Linking static target lib/librte_ring.a 00:02:03.793 [103/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.793 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:03.793 [105/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.793 [106/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.793 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.793 [108/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.793 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:03.793 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:03.794 [111/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.794 [112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.794 [113/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.794 [114/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.794 [115/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.794 [116/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.794 [117/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.794 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.794 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.794 [120/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.794 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:03.794 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.794 [123/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.794 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.794 [125/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.794 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:03.794 [127/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.794 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.794 [129/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:03.794 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.794 [131/267] Linking static target lib/librte_mempool.a 00:02:03.794 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.794 [133/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.794 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:03.794 [135/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.794 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.794 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.794 [138/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.794 [139/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:03.794 [140/267] Linking static target lib/librte_dmadev.a 00:02:03.794 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:03.794 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.794 [143/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:03.794 [144/267] Linking static target lib/librte_cmdline.a 00:02:03.794 [145/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:03.794 [146/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:03.794 [147/267] Linking static target lib/librte_compressdev.a 00:02:03.794 [148/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.794 [149/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:03.794 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:03.794 [151/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:03.794 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.794 [153/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.794 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.794 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:03.794 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.794 [157/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.794 [158/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.794 [159/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:03.794 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.794 [161/267] Linking static target lib/librte_net.a 00:02:03.794 [162/267] Linking static target lib/librte_power.a 00:02:03.794 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.794 [164/267] Linking static target lib/librte_security.a 00:02:03.794 [165/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.794 [166/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.794 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:03.794 [168/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:03.794 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:03.794 [170/267] Linking static target lib/librte_rcu.a 00:02:03.794 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:03.794 [172/267] Linking static target lib/librte_reorder.a 00:02:03.794 [173/267] Linking target lib/librte_log.so.24.1 00:02:03.794 [174/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.794 [175/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:03.794 [176/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:03.794 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.794 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:03.794 [179/267] Linking static target lib/librte_eal.a 00:02:03.794 [180/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.056 [181/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:04.056 
[182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.056 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.056 [184/267] Linking static target lib/librte_mbuf.a 00:02:04.056 [185/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.056 [186/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.056 [187/267] Linking target lib/librte_kvargs.so.24.1 00:02:04.056 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.056 [189/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.056 [190/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.056 [191/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.056 [192/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.056 [193/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.056 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:04.056 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.056 [196/267] Linking static target drivers/librte_bus_pci.a 00:02:04.056 [197/267] Linking static target drivers/librte_bus_vdev.a 00:02:04.056 [198/267] Linking static target lib/librte_hash.a 00:02:04.056 [199/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:04.056 [200/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.056 [201/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.056 [202/267] Linking static target drivers/librte_mempool_ring.a 00:02:04.056 [203/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.056 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.056 [205/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.056 [206/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.317 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.317 [208/267] Linking static target lib/librte_cryptodev.a 00:02:04.317 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.317 [210/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.317 [211/267] Linking target lib/librte_telemetry.so.24.1 00:02:04.317 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.317 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:04.318 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.579 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.579 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.579 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.579 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:04.579 [219/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.579 [220/267] Linking static target lib/librte_ethdev.a 00:02:04.840 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.840 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.840 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.840 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.101 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.101 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.688 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:05.688 [228/267] Linking static target lib/librte_vhost.a 00:02:06.260 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.644 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.249 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.822 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.822 [233/267] Linking target lib/librte_eal.so.24.1 00:02:15.083 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:15.083 [235/267] Linking target lib/librte_ring.so.24.1 00:02:15.083 [236/267] Linking target lib/librte_meter.so.24.1 00:02:15.083 [237/267] Linking target lib/librte_pci.so.24.1 00:02:15.083 [238/267] Linking target lib/librte_timer.so.24.1 00:02:15.083 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:15.083 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:15.083 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:15.083 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:15.083 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:15.083 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:15.344 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:15.344 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:15.344 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:15.344 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:15.344 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:15.344 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:15.344 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:15.344 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:15.605 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:15.605 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:15.605 [255/267] Linking target lib/librte_net.so.24.1 00:02:15.605 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:15.605 [257/267] Linking target lib/librte_reorder.so.24.1 00:02:15.605 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:15.605 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:15.866 
[260/267] Linking target lib/librte_cmdline.so.24.1 00:02:15.866 [261/267] Linking target lib/librte_hash.so.24.1 00:02:15.866 [262/267] Linking target lib/librte_security.so.24.1 00:02:15.866 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:15.866 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:15.866 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:15.866 [266/267] Linking target lib/librte_power.so.24.1 00:02:15.866 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:16.127 INFO: autodetecting backend as ninja 00:02:16.127 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:20.333 CC lib/log/log.o 00:02:20.333 CC lib/log/log_flags.o 00:02:20.333 CC lib/log/log_deprecated.o 00:02:20.333 CC lib/ut/ut.o 00:02:20.333 CC lib/ut_mock/mock.o 00:02:20.333 LIB libspdk_log.a 00:02:20.333 LIB libspdk_ut.a 00:02:20.333 LIB libspdk_ut_mock.a 00:02:20.333 SO libspdk_log.so.7.1 00:02:20.333 SO libspdk_ut_mock.so.6.0 00:02:20.333 SO libspdk_ut.so.2.0 00:02:20.333 SYMLINK libspdk_ut_mock.so 00:02:20.333 SYMLINK libspdk_log.so 00:02:20.333 SYMLINK libspdk_ut.so 00:02:20.333 CC lib/ioat/ioat.o 00:02:20.333 CC lib/util/base64.o 00:02:20.333 CC lib/dma/dma.o 00:02:20.333 CC lib/util/bit_array.o 00:02:20.333 CC lib/util/cpuset.o 00:02:20.333 CC lib/util/crc16.o 00:02:20.333 CC lib/util/crc32.o 00:02:20.333 CC lib/util/crc32c.o 00:02:20.333 CC lib/util/crc32_ieee.o 00:02:20.333 CXX lib/trace_parser/trace.o 00:02:20.333 CC lib/util/crc64.o 00:02:20.333 CC lib/util/dif.o 00:02:20.333 CC lib/util/fd.o 00:02:20.333 CC lib/util/fd_group.o 00:02:20.333 CC lib/util/file.o 00:02:20.333 CC lib/util/hexlify.o 00:02:20.333 CC lib/util/iov.o 00:02:20.333 CC lib/util/math.o 00:02:20.333 CC lib/util/pipe.o 00:02:20.333 CC lib/util/net.o 00:02:20.333 CC lib/util/strerror_tls.o 00:02:20.333 CC lib/util/string.o 00:02:20.333 CC lib/util/uuid.o 00:02:20.333 CC lib/util/xor.o 00:02:20.333 CC lib/util/zipf.o 00:02:20.333 CC lib/util/md5.o 00:02:20.593 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.593 CC lib/vfio_user/host/vfio_user.o 00:02:20.593 LIB libspdk_dma.a 00:02:20.593 SO libspdk_dma.so.5.0 00:02:20.593 LIB libspdk_ioat.a 00:02:20.593 SO libspdk_ioat.so.7.0 00:02:20.593 SYMLINK libspdk_dma.so 00:02:20.858 SYMLINK libspdk_ioat.so 00:02:20.858 LIB libspdk_vfio_user.a 00:02:20.858 SO libspdk_vfio_user.so.5.0 00:02:20.858 LIB libspdk_util.a 00:02:20.858 SYMLINK libspdk_vfio_user.so 00:02:20.858 SO libspdk_util.so.10.1 00:02:21.123 LIB libspdk_trace_parser.a 00:02:21.123 SO libspdk_trace_parser.so.6.0 00:02:21.123 SYMLINK libspdk_util.so 00:02:21.123 SYMLINK libspdk_trace_parser.so 00:02:21.385 CC lib/rdma_utils/rdma_utils.o 00:02:21.385 CC lib/idxd/idxd.o 00:02:21.385 CC lib/idxd/idxd_user.o 00:02:21.385 CC lib/json/json_parse.o 00:02:21.385 CC lib/idxd/idxd_kernel.o 00:02:21.385 CC lib/json/json_util.o 00:02:21.385 CC lib/json/json_write.o 00:02:21.385 CC lib/vmd/vmd.o 00:02:21.385 CC lib/vmd/led.o 00:02:21.385 CC lib/conf/conf.o 00:02:21.385 CC lib/env_dpdk/env.o 00:02:21.385 CC lib/env_dpdk/memory.o 00:02:21.385 CC lib/env_dpdk/pci.o 00:02:21.385 CC lib/env_dpdk/init.o 00:02:21.385 CC lib/env_dpdk/threads.o 00:02:21.385 CC lib/env_dpdk/pci_ioat.o 00:02:21.385 CC lib/env_dpdk/pci_virtio.o 00:02:21.385 CC lib/env_dpdk/pci_vmd.o 00:02:21.385 CC lib/env_dpdk/pci_idxd.o 00:02:21.385 CC lib/env_dpdk/pci_event.o 00:02:21.385 CC 
lib/env_dpdk/sigbus_handler.o 00:02:21.385 CC lib/env_dpdk/pci_dpdk.o 00:02:21.385 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.385 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.646 LIB libspdk_rdma_utils.a 00:02:21.646 LIB libspdk_conf.a 00:02:21.646 SO libspdk_rdma_utils.so.1.0 00:02:21.646 SO libspdk_conf.so.6.0 00:02:21.646 SYMLINK libspdk_rdma_utils.so 00:02:21.646 SYMLINK libspdk_conf.so 00:02:21.646 LIB libspdk_json.a 00:02:21.907 SO libspdk_json.so.6.0 00:02:21.907 SYMLINK libspdk_json.so 00:02:21.907 LIB libspdk_idxd.a 00:02:21.907 SO libspdk_idxd.so.12.1 00:02:21.907 LIB libspdk_vmd.a 00:02:22.168 SO libspdk_vmd.so.6.0 00:02:22.168 CC lib/rdma_provider/common.o 00:02:22.168 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:22.168 SYMLINK libspdk_idxd.so 00:02:22.168 SYMLINK libspdk_vmd.so 00:02:22.168 CC lib/jsonrpc/jsonrpc_server.o 00:02:22.168 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:22.168 CC lib/jsonrpc/jsonrpc_client.o 00:02:22.168 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:22.168 LIB libspdk_rdma_provider.a 00:02:22.430 SO libspdk_rdma_provider.so.7.0 00:02:22.430 SYMLINK libspdk_rdma_provider.so 00:02:22.430 LIB libspdk_jsonrpc.a 00:02:22.430 SO libspdk_jsonrpc.so.6.0 00:02:22.691 SYMLINK libspdk_jsonrpc.so 00:02:22.691 LIB libspdk_env_dpdk.a 00:02:22.691 SO libspdk_env_dpdk.so.15.1 00:02:22.951 SYMLINK libspdk_env_dpdk.so 00:02:22.951 CC lib/rpc/rpc.o 00:02:23.213 LIB libspdk_rpc.a 00:02:23.213 SO libspdk_rpc.so.6.0 00:02:23.213 SYMLINK libspdk_rpc.so 00:02:23.785 CC lib/notify/notify.o 00:02:23.785 CC lib/notify/notify_rpc.o 00:02:23.785 CC lib/trace/trace.o 00:02:23.785 CC lib/trace/trace_flags.o 00:02:23.785 CC lib/keyring/keyring.o 00:02:23.785 CC lib/trace/trace_rpc.o 00:02:23.785 CC lib/keyring/keyring_rpc.o 00:02:23.785 LIB libspdk_notify.a 00:02:23.785 SO libspdk_notify.so.6.0 00:02:23.785 LIB libspdk_keyring.a 00:02:23.785 LIB libspdk_trace.a 00:02:24.046 SYMLINK libspdk_notify.so 00:02:24.046 SO libspdk_keyring.so.2.0 00:02:24.046 SO libspdk_trace.so.11.0 00:02:24.046 SYMLINK libspdk_keyring.so 00:02:24.046 SYMLINK libspdk_trace.so 00:02:24.307 CC lib/thread/thread.o 00:02:24.307 CC lib/thread/iobuf.o 00:02:24.307 CC lib/sock/sock.o 00:02:24.307 CC lib/sock/sock_rpc.o 00:02:24.880 LIB libspdk_sock.a 00:02:24.880 SO libspdk_sock.so.10.0 00:02:24.880 SYMLINK libspdk_sock.so 00:02:25.142 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:25.142 CC lib/nvme/nvme_ctrlr.o 00:02:25.142 CC lib/nvme/nvme_fabric.o 00:02:25.142 CC lib/nvme/nvme_ns_cmd.o 00:02:25.142 CC lib/nvme/nvme_ns.o 00:02:25.142 CC lib/nvme/nvme_pcie_common.o 00:02:25.142 CC lib/nvme/nvme_pcie.o 00:02:25.142 CC lib/nvme/nvme_qpair.o 00:02:25.142 CC lib/nvme/nvme.o 00:02:25.142 CC lib/nvme/nvme_quirks.o 00:02:25.142 CC lib/nvme/nvme_transport.o 00:02:25.142 CC lib/nvme/nvme_discovery.o 00:02:25.142 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.142 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.142 CC lib/nvme/nvme_tcp.o 00:02:25.142 CC lib/nvme/nvme_opal.o 00:02:25.142 CC lib/nvme/nvme_io_msg.o 00:02:25.142 CC lib/nvme/nvme_poll_group.o 00:02:25.142 CC lib/nvme/nvme_zns.o 00:02:25.142 CC lib/nvme/nvme_stubs.o 00:02:25.142 CC lib/nvme/nvme_auth.o 00:02:25.142 CC lib/nvme/nvme_cuse.o 00:02:25.142 CC lib/nvme/nvme_rdma.o 00:02:25.714 LIB libspdk_thread.a 00:02:25.714 SO libspdk_thread.so.11.0 00:02:25.714 SYMLINK libspdk_thread.so 00:02:25.975 CC lib/accel/accel_rpc.o 00:02:25.975 CC lib/accel/accel.o 00:02:25.975 CC lib/accel/accel_sw.o 00:02:25.975 CC lib/blob/blobstore.o 00:02:25.975 CC lib/blob/request.o 00:02:25.975 CC 
lib/virtio/virtio.o 00:02:25.975 CC lib/blob/zeroes.o 00:02:25.975 CC lib/blob/blob_bs_dev.o 00:02:25.975 CC lib/virtio/virtio_vhost_user.o 00:02:25.975 CC lib/virtio/virtio_vfio_user.o 00:02:25.975 CC lib/virtio/virtio_pci.o 00:02:26.237 CC lib/init/json_config.o 00:02:26.237 CC lib/fsdev/fsdev.o 00:02:26.237 CC lib/fsdev/fsdev_io.o 00:02:26.237 CC lib/init/subsystem.o 00:02:26.237 CC lib/init/subsystem_rpc.o 00:02:26.237 CC lib/fsdev/fsdev_rpc.o 00:02:26.237 CC lib/init/rpc.o 00:02:26.237 LIB libspdk_init.a 00:02:26.498 SO libspdk_init.so.6.0 00:02:26.498 LIB libspdk_virtio.a 00:02:26.498 SYMLINK libspdk_init.so 00:02:26.498 SO libspdk_virtio.so.7.0 00:02:26.498 SYMLINK libspdk_virtio.so 00:02:26.760 LIB libspdk_fsdev.a 00:02:26.760 SO libspdk_fsdev.so.2.0 00:02:26.760 CC lib/event/app.o 00:02:26.760 CC lib/event/reactor.o 00:02:26.760 CC lib/event/log_rpc.o 00:02:26.760 CC lib/event/app_rpc.o 00:02:26.760 CC lib/event/scheduler_static.o 00:02:26.760 SYMLINK libspdk_fsdev.so 00:02:27.021 LIB libspdk_accel.a 00:02:27.021 SO libspdk_accel.so.16.0 00:02:27.021 LIB libspdk_nvme.a 00:02:27.281 SYMLINK libspdk_accel.so 00:02:27.281 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:27.281 LIB libspdk_event.a 00:02:27.281 SO libspdk_nvme.so.15.0 00:02:27.281 SO libspdk_event.so.14.0 00:02:27.281 SYMLINK libspdk_event.so 00:02:27.542 SYMLINK libspdk_nvme.so 00:02:27.542 CC lib/bdev/bdev.o 00:02:27.542 CC lib/bdev/bdev_zone.o 00:02:27.542 CC lib/bdev/bdev_rpc.o 00:02:27.542 CC lib/bdev/part.o 00:02:27.542 CC lib/bdev/scsi_nvme.o 00:02:27.803 LIB libspdk_fuse_dispatcher.a 00:02:27.803 SO libspdk_fuse_dispatcher.so.1.0 00:02:27.803 SYMLINK libspdk_fuse_dispatcher.so 00:02:28.746 LIB libspdk_blob.a 00:02:28.746 SO libspdk_blob.so.11.0 00:02:28.746 SYMLINK libspdk_blob.so 00:02:29.317 CC lib/lvol/lvol.o 00:02:29.317 CC lib/blobfs/blobfs.o 00:02:29.317 CC lib/blobfs/tree.o 00:02:29.890 LIB libspdk_bdev.a 00:02:29.890 SO libspdk_bdev.so.17.0 00:02:29.890 LIB libspdk_blobfs.a 00:02:29.890 SO libspdk_blobfs.so.10.0 00:02:30.152 SYMLINK libspdk_bdev.so 00:02:30.152 LIB libspdk_lvol.a 00:02:30.152 SYMLINK libspdk_blobfs.so 00:02:30.152 SO libspdk_lvol.so.10.0 00:02:30.152 SYMLINK libspdk_lvol.so 00:02:30.413 CC lib/ublk/ublk.o 00:02:30.413 CC lib/ublk/ublk_rpc.o 00:02:30.413 CC lib/scsi/dev.o 00:02:30.413 CC lib/scsi/lun.o 00:02:30.413 CC lib/scsi/port.o 00:02:30.413 CC lib/scsi/scsi.o 00:02:30.413 CC lib/scsi/scsi_bdev.o 00:02:30.413 CC lib/scsi/scsi_pr.o 00:02:30.413 CC lib/scsi/scsi_rpc.o 00:02:30.413 CC lib/scsi/task.o 00:02:30.413 CC lib/nvmf/ctrlr.o 00:02:30.413 CC lib/nvmf/ctrlr_discovery.o 00:02:30.413 CC lib/nvmf/ctrlr_bdev.o 00:02:30.413 CC lib/nvmf/subsystem.o 00:02:30.413 CC lib/nvmf/nvmf.o 00:02:30.413 CC lib/nbd/nbd.o 00:02:30.413 CC lib/nvmf/nvmf_rpc.o 00:02:30.413 CC lib/ftl/ftl_core.o 00:02:30.413 CC lib/nbd/nbd_rpc.o 00:02:30.413 CC lib/nvmf/transport.o 00:02:30.413 CC lib/nvmf/tcp.o 00:02:30.413 CC lib/ftl/ftl_init.o 00:02:30.413 CC lib/nvmf/stubs.o 00:02:30.413 CC lib/ftl/ftl_layout.o 00:02:30.413 CC lib/nvmf/mdns_server.o 00:02:30.413 CC lib/nvmf/rdma.o 00:02:30.413 CC lib/ftl/ftl_io.o 00:02:30.413 CC lib/ftl/ftl_debug.o 00:02:30.413 CC lib/nvmf/auth.o 00:02:30.413 CC lib/ftl/ftl_sb.o 00:02:30.413 CC lib/ftl/ftl_l2p.o 00:02:30.413 CC lib/ftl/ftl_l2p_flat.o 00:02:30.413 CC lib/ftl/ftl_band.o 00:02:30.413 CC lib/ftl/ftl_nv_cache.o 00:02:30.413 CC lib/ftl/ftl_band_ops.o 00:02:30.413 CC lib/ftl/ftl_writer.o 00:02:30.413 CC lib/ftl/ftl_rq.o 00:02:30.413 CC lib/ftl/ftl_reloc.o 00:02:30.413 CC 
lib/ftl/ftl_l2p_cache.o 00:02:30.413 CC lib/ftl/ftl_p2l.o 00:02:30.413 CC lib/ftl/ftl_p2l_log.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:30.413 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:30.413 CC lib/ftl/utils/ftl_md.o 00:02:30.413 CC lib/ftl/utils/ftl_conf.o 00:02:30.413 CC lib/ftl/utils/ftl_mempool.o 00:02:30.413 CC lib/ftl/utils/ftl_bitmap.o 00:02:30.413 CC lib/ftl/utils/ftl_property.o 00:02:30.413 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:30.413 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:30.413 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:30.413 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:30.413 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:30.413 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:30.413 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:30.413 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:30.413 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:30.413 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:30.413 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:30.413 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:30.413 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:30.413 CC lib/ftl/base/ftl_base_bdev.o 00:02:30.413 CC lib/ftl/base/ftl_base_dev.o 00:02:30.413 CC lib/ftl/ftl_trace.o 00:02:30.982 LIB libspdk_nbd.a 00:02:30.982 LIB libspdk_scsi.a 00:02:30.982 SO libspdk_nbd.so.7.0 00:02:30.982 SO libspdk_scsi.so.9.0 00:02:30.982 SYMLINK libspdk_nbd.so 00:02:30.982 LIB libspdk_ublk.a 00:02:30.982 SYMLINK libspdk_scsi.so 00:02:31.243 SO libspdk_ublk.so.3.0 00:02:31.243 SYMLINK libspdk_ublk.so 00:02:31.504 LIB libspdk_ftl.a 00:02:31.504 CC lib/iscsi/conn.o 00:02:31.504 CC lib/iscsi/init_grp.o 00:02:31.504 CC lib/iscsi/iscsi.o 00:02:31.504 CC lib/iscsi/portal_grp.o 00:02:31.504 CC lib/iscsi/param.o 00:02:31.504 CC lib/iscsi/tgt_node.o 00:02:31.504 CC lib/iscsi/iscsi_subsystem.o 00:02:31.504 CC lib/iscsi/task.o 00:02:31.504 CC lib/iscsi/iscsi_rpc.o 00:02:31.504 CC lib/vhost/vhost.o 00:02:31.504 CC lib/vhost/vhost_rpc.o 00:02:31.504 CC lib/vhost/vhost_scsi.o 00:02:31.504 CC lib/vhost/vhost_blk.o 00:02:31.504 CC lib/vhost/rte_vhost_user.o 00:02:31.504 SO libspdk_ftl.so.9.0 00:02:31.765 SYMLINK libspdk_ftl.so 00:02:32.337 LIB libspdk_nvmf.a 00:02:32.337 LIB libspdk_iscsi.a 00:02:32.337 SO libspdk_nvmf.so.20.0 00:02:32.337 SO libspdk_iscsi.so.8.0 00:02:32.337 LIB libspdk_vhost.a 00:02:32.337 SO libspdk_vhost.so.8.0 00:02:32.337 SYMLINK libspdk_nvmf.so 00:02:32.337 SYMLINK libspdk_iscsi.so 00:02:32.598 SYMLINK libspdk_vhost.so 00:02:33.170 CC module/env_dpdk/env_dpdk_rpc.o 00:02:33.170 CC module/blob/bdev/blob_bdev.o 00:02:33.170 LIB libspdk_env_dpdk_rpc.a 00:02:33.170 CC module/scheduler/gscheduler/gscheduler.o 00:02:33.170 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:33.171 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:33.171 CC module/keyring/linux/keyring.o 00:02:33.171 CC module/sock/posix/posix.o 00:02:33.171 CC module/fsdev/aio/fsdev_aio.o 00:02:33.171 CC module/keyring/linux/keyring_rpc.o 00:02:33.171 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:33.171 CC module/fsdev/aio/linux_aio_mgr.o 00:02:33.171 CC 
module/keyring/file/keyring.o 00:02:33.171 CC module/keyring/file/keyring_rpc.o 00:02:33.171 CC module/accel/error/accel_error.o 00:02:33.171 CC module/accel/iaa/accel_iaa.o 00:02:33.171 CC module/accel/error/accel_error_rpc.o 00:02:33.171 CC module/accel/ioat/accel_ioat.o 00:02:33.171 CC module/accel/iaa/accel_iaa_rpc.o 00:02:33.171 CC module/accel/dsa/accel_dsa.o 00:02:33.171 CC module/accel/ioat/accel_ioat_rpc.o 00:02:33.171 CC module/accel/dsa/accel_dsa_rpc.o 00:02:33.171 SO libspdk_env_dpdk_rpc.so.6.0 00:02:33.431 SYMLINK libspdk_env_dpdk_rpc.so 00:02:33.431 LIB libspdk_scheduler_dpdk_governor.a 00:02:33.431 LIB libspdk_keyring_linux.a 00:02:33.431 LIB libspdk_keyring_file.a 00:02:33.431 LIB libspdk_scheduler_gscheduler.a 00:02:33.431 LIB libspdk_accel_error.a 00:02:33.431 LIB libspdk_scheduler_dynamic.a 00:02:33.431 SO libspdk_keyring_file.so.2.0 00:02:33.431 LIB libspdk_accel_ioat.a 00:02:33.431 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:33.431 SO libspdk_keyring_linux.so.1.0 00:02:33.431 LIB libspdk_accel_iaa.a 00:02:33.431 SO libspdk_scheduler_gscheduler.so.4.0 00:02:33.431 SO libspdk_accel_error.so.2.0 00:02:33.431 SO libspdk_scheduler_dynamic.so.4.0 00:02:33.431 SO libspdk_accel_ioat.so.6.0 00:02:33.431 SYMLINK libspdk_keyring_linux.so 00:02:33.431 LIB libspdk_blob_bdev.a 00:02:33.431 SO libspdk_accel_iaa.so.3.0 00:02:33.431 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:33.432 SYMLINK libspdk_keyring_file.so 00:02:33.432 SYMLINK libspdk_scheduler_dynamic.so 00:02:33.432 SYMLINK libspdk_scheduler_gscheduler.so 00:02:33.432 SYMLINK libspdk_accel_error.so 00:02:33.432 LIB libspdk_accel_dsa.a 00:02:33.432 SO libspdk_blob_bdev.so.11.0 00:02:33.432 SYMLINK libspdk_accel_ioat.so 00:02:33.432 SYMLINK libspdk_accel_iaa.so 00:02:33.694 SO libspdk_accel_dsa.so.5.0 00:02:33.694 SYMLINK libspdk_blob_bdev.so 00:02:33.694 SYMLINK libspdk_accel_dsa.so 00:02:33.955 LIB libspdk_fsdev_aio.a 00:02:33.955 SO libspdk_fsdev_aio.so.1.0 00:02:33.955 LIB libspdk_sock_posix.a 00:02:33.955 SYMLINK libspdk_fsdev_aio.so 00:02:33.955 SO libspdk_sock_posix.so.6.0 00:02:33.955 SYMLINK libspdk_sock_posix.so 00:02:34.215 CC module/bdev/delay/vbdev_delay.o 00:02:34.215 CC module/bdev/null/bdev_null_rpc.o 00:02:34.215 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:34.215 CC module/bdev/null/bdev_null.o 00:02:34.215 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:34.215 CC module/bdev/lvol/vbdev_lvol.o 00:02:34.215 CC module/bdev/gpt/vbdev_gpt.o 00:02:34.215 CC module/bdev/gpt/gpt.o 00:02:34.215 CC module/bdev/passthru/vbdev_passthru.o 00:02:34.215 CC module/blobfs/bdev/blobfs_bdev.o 00:02:34.215 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:34.215 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:34.215 CC module/bdev/ftl/bdev_ftl.o 00:02:34.215 CC module/bdev/error/vbdev_error.o 00:02:34.215 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:34.215 CC module/bdev/error/vbdev_error_rpc.o 00:02:34.215 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:34.215 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:34.215 CC module/bdev/aio/bdev_aio.o 00:02:34.215 CC module/bdev/aio/bdev_aio_rpc.o 00:02:34.215 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:34.215 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:34.215 CC module/bdev/malloc/bdev_malloc.o 00:02:34.215 CC module/bdev/iscsi/bdev_iscsi.o 00:02:34.215 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:34.215 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:34.215 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:34.215 CC module/bdev/nvme/bdev_nvme.o 00:02:34.215 CC 
module/bdev/raid/bdev_raid.o 00:02:34.215 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:34.215 CC module/bdev/raid/bdev_raid_rpc.o 00:02:34.215 CC module/bdev/nvme/nvme_rpc.o 00:02:34.215 CC module/bdev/raid/raid0.o 00:02:34.215 CC module/bdev/nvme/bdev_mdns_client.o 00:02:34.215 CC module/bdev/raid/bdev_raid_sb.o 00:02:34.215 CC module/bdev/nvme/vbdev_opal.o 00:02:34.215 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:34.215 CC module/bdev/raid/raid1.o 00:02:34.215 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:34.215 CC module/bdev/split/vbdev_split.o 00:02:34.215 CC module/bdev/raid/concat.o 00:02:34.215 CC module/bdev/split/vbdev_split_rpc.o 00:02:34.215 LIB libspdk_blobfs_bdev.a 00:02:34.476 SO libspdk_blobfs_bdev.so.6.0 00:02:34.476 LIB libspdk_bdev_gpt.a 00:02:34.476 LIB libspdk_bdev_split.a 00:02:34.476 LIB libspdk_bdev_null.a 00:02:34.476 SYMLINK libspdk_blobfs_bdev.so 00:02:34.476 LIB libspdk_bdev_passthru.a 00:02:34.476 SO libspdk_bdev_split.so.6.0 00:02:34.476 LIB libspdk_bdev_error.a 00:02:34.476 LIB libspdk_bdev_ftl.a 00:02:34.476 SO libspdk_bdev_null.so.6.0 00:02:34.476 SO libspdk_bdev_gpt.so.6.0 00:02:34.476 SO libspdk_bdev_passthru.so.6.0 00:02:34.476 SO libspdk_bdev_error.so.6.0 00:02:34.476 SO libspdk_bdev_ftl.so.6.0 00:02:34.476 LIB libspdk_bdev_aio.a 00:02:34.476 LIB libspdk_bdev_zone_block.a 00:02:34.476 SYMLINK libspdk_bdev_split.so 00:02:34.476 LIB libspdk_bdev_iscsi.a 00:02:34.476 SYMLINK libspdk_bdev_passthru.so 00:02:34.476 LIB libspdk_bdev_delay.a 00:02:34.476 SYMLINK libspdk_bdev_null.so 00:02:34.476 SYMLINK libspdk_bdev_gpt.so 00:02:34.476 LIB libspdk_bdev_malloc.a 00:02:34.476 SO libspdk_bdev_aio.so.6.0 00:02:34.476 SO libspdk_bdev_iscsi.so.6.0 00:02:34.476 SYMLINK libspdk_bdev_ftl.so 00:02:34.476 SO libspdk_bdev_zone_block.so.6.0 00:02:34.476 SO libspdk_bdev_malloc.so.6.0 00:02:34.476 SYMLINK libspdk_bdev_error.so 00:02:34.476 SO libspdk_bdev_delay.so.6.0 00:02:34.476 SYMLINK libspdk_bdev_aio.so 00:02:34.737 SYMLINK libspdk_bdev_iscsi.so 00:02:34.737 SYMLINK libspdk_bdev_zone_block.so 00:02:34.737 SYMLINK libspdk_bdev_malloc.so 00:02:34.737 LIB libspdk_bdev_lvol.a 00:02:34.737 SYMLINK libspdk_bdev_delay.so 00:02:34.737 LIB libspdk_bdev_virtio.a 00:02:34.737 SO libspdk_bdev_lvol.so.6.0 00:02:34.737 SO libspdk_bdev_virtio.so.6.0 00:02:34.737 SYMLINK libspdk_bdev_lvol.so 00:02:34.737 SYMLINK libspdk_bdev_virtio.so 00:02:34.999 LIB libspdk_bdev_raid.a 00:02:34.999 SO libspdk_bdev_raid.so.6.0 00:02:35.260 SYMLINK libspdk_bdev_raid.so 00:02:36.646 LIB libspdk_bdev_nvme.a 00:02:36.646 SO libspdk_bdev_nvme.so.7.1 00:02:36.646 SYMLINK libspdk_bdev_nvme.so 00:02:37.220 CC module/event/subsystems/sock/sock.o 00:02:37.220 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.220 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.220 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.220 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.220 CC module/event/subsystems/vmd/vmd.o 00:02:37.220 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.220 CC module/event/subsystems/keyring/keyring.o 00:02:37.220 CC module/event/subsystems/fsdev/fsdev.o 00:02:37.481 LIB libspdk_event_vhost_blk.a 00:02:37.481 LIB libspdk_event_sock.a 00:02:37.481 LIB libspdk_event_scheduler.a 00:02:37.481 LIB libspdk_event_iobuf.a 00:02:37.481 SO libspdk_event_scheduler.so.4.0 00:02:37.481 LIB libspdk_event_keyring.a 00:02:37.481 SO libspdk_event_sock.so.5.0 00:02:37.481 SO libspdk_event_vhost_blk.so.3.0 00:02:37.481 LIB libspdk_event_vmd.a 00:02:37.481 LIB libspdk_event_fsdev.a 00:02:37.481 SO 
libspdk_event_iobuf.so.3.0 00:02:37.481 SO libspdk_event_keyring.so.1.0 00:02:37.481 SO libspdk_event_fsdev.so.1.0 00:02:37.481 SO libspdk_event_vmd.so.6.0 00:02:37.481 SYMLINK libspdk_event_sock.so 00:02:37.481 SYMLINK libspdk_event_scheduler.so 00:02:37.481 SYMLINK libspdk_event_vhost_blk.so 00:02:37.481 SYMLINK libspdk_event_iobuf.so 00:02:37.481 SYMLINK libspdk_event_keyring.so 00:02:37.481 SYMLINK libspdk_event_fsdev.so 00:02:37.481 SYMLINK libspdk_event_vmd.so 00:02:37.743 CC module/event/subsystems/accel/accel.o 00:02:38.004 LIB libspdk_event_accel.a 00:02:38.004 SO libspdk_event_accel.so.6.0 00:02:38.004 SYMLINK libspdk_event_accel.so 00:02:38.575 CC module/event/subsystems/bdev/bdev.o 00:02:38.575 LIB libspdk_event_bdev.a 00:02:38.575 SO libspdk_event_bdev.so.6.0 00:02:38.867 SYMLINK libspdk_event_bdev.so 00:02:39.129 CC module/event/subsystems/scsi/scsi.o 00:02:39.129 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:39.129 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:39.129 CC module/event/subsystems/nbd/nbd.o 00:02:39.129 CC module/event/subsystems/ublk/ublk.o 00:02:39.129 LIB libspdk_event_scsi.a 00:02:39.129 LIB libspdk_event_nbd.a 00:02:39.129 LIB libspdk_event_ublk.a 00:02:39.129 SO libspdk_event_scsi.so.6.0 00:02:39.129 SO libspdk_event_nbd.so.6.0 00:02:39.390 SO libspdk_event_ublk.so.3.0 00:02:39.390 LIB libspdk_event_nvmf.a 00:02:39.390 SYMLINK libspdk_event_scsi.so 00:02:39.390 SYMLINK libspdk_event_nbd.so 00:02:39.390 SO libspdk_event_nvmf.so.6.0 00:02:39.390 SYMLINK libspdk_event_ublk.so 00:02:39.390 SYMLINK libspdk_event_nvmf.so 00:02:39.651 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.651 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.913 LIB libspdk_event_vhost_scsi.a 00:02:39.913 LIB libspdk_event_iscsi.a 00:02:39.913 SO libspdk_event_vhost_scsi.so.3.0 00:02:39.913 SO libspdk_event_iscsi.so.6.0 00:02:39.913 SYMLINK libspdk_event_vhost_scsi.so 00:02:39.913 SYMLINK libspdk_event_iscsi.so 00:02:40.174 SO libspdk.so.6.0 00:02:40.174 SYMLINK libspdk.so 00:02:40.437 TEST_HEADER include/spdk/accel.h 00:02:40.437 CXX app/trace/trace.o 00:02:40.437 TEST_HEADER include/spdk/accel_module.h 00:02:40.437 CC test/rpc_client/rpc_client_test.o 00:02:40.437 TEST_HEADER include/spdk/assert.h 00:02:40.437 TEST_HEADER include/spdk/barrier.h 00:02:40.437 TEST_HEADER include/spdk/base64.h 00:02:40.437 CC app/trace_record/trace_record.o 00:02:40.437 TEST_HEADER include/spdk/bdev.h 00:02:40.437 TEST_HEADER include/spdk/bdev_module.h 00:02:40.437 CC app/spdk_lspci/spdk_lspci.o 00:02:40.437 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.437 TEST_HEADER include/spdk/bit_array.h 00:02:40.437 TEST_HEADER include/spdk/bit_pool.h 00:02:40.437 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.437 CC app/spdk_nvme_perf/perf.o 00:02:40.437 CC app/spdk_top/spdk_top.o 00:02:40.437 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.437 TEST_HEADER include/spdk/blobfs.h 00:02:40.437 TEST_HEADER include/spdk/blob.h 00:02:40.437 TEST_HEADER include/spdk/conf.h 00:02:40.437 TEST_HEADER include/spdk/config.h 00:02:40.437 CC app/spdk_nvme_discover/discovery_aer.o 00:02:40.437 CC app/spdk_nvme_identify/identify.o 00:02:40.437 TEST_HEADER include/spdk/cpuset.h 00:02:40.437 TEST_HEADER include/spdk/crc16.h 00:02:40.437 TEST_HEADER include/spdk/crc32.h 00:02:40.437 TEST_HEADER include/spdk/crc64.h 00:02:40.437 TEST_HEADER include/spdk/dif.h 00:02:40.437 TEST_HEADER include/spdk/dma.h 00:02:40.437 TEST_HEADER include/spdk/endian.h 00:02:40.437 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.437 
TEST_HEADER include/spdk/env.h 00:02:40.437 TEST_HEADER include/spdk/event.h 00:02:40.437 TEST_HEADER include/spdk/fd_group.h 00:02:40.437 TEST_HEADER include/spdk/fd.h 00:02:40.437 TEST_HEADER include/spdk/file.h 00:02:40.437 TEST_HEADER include/spdk/fsdev.h 00:02:40.437 TEST_HEADER include/spdk/fsdev_module.h 00:02:40.698 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:40.698 TEST_HEADER include/spdk/ftl.h 00:02:40.698 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:40.698 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.698 TEST_HEADER include/spdk/hexlify.h 00:02:40.698 TEST_HEADER include/spdk/histogram_data.h 00:02:40.698 TEST_HEADER include/spdk/idxd.h 00:02:40.698 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.698 TEST_HEADER include/spdk/init.h 00:02:40.698 CC app/nvmf_tgt/nvmf_main.o 00:02:40.698 CC app/spdk_dd/spdk_dd.o 00:02:40.698 TEST_HEADER include/spdk/ioat.h 00:02:40.698 TEST_HEADER include/spdk/iscsi_spec.h 00:02:40.698 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.698 TEST_HEADER include/spdk/json.h 00:02:40.698 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.698 TEST_HEADER include/spdk/keyring.h 00:02:40.698 TEST_HEADER include/spdk/likely.h 00:02:40.698 TEST_HEADER include/spdk/keyring_module.h 00:02:40.698 TEST_HEADER include/spdk/log.h 00:02:40.698 TEST_HEADER include/spdk/lvol.h 00:02:40.698 TEST_HEADER include/spdk/md5.h 00:02:40.698 TEST_HEADER include/spdk/memory.h 00:02:40.698 TEST_HEADER include/spdk/mmio.h 00:02:40.698 TEST_HEADER include/spdk/nbd.h 00:02:40.698 TEST_HEADER include/spdk/net.h 00:02:40.698 TEST_HEADER include/spdk/notify.h 00:02:40.698 TEST_HEADER include/spdk/nvme.h 00:02:40.698 CC app/iscsi_tgt/iscsi_tgt.o 00:02:40.698 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.698 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:40.698 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.698 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.698 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.698 CC app/spdk_tgt/spdk_tgt.o 00:02:40.698 TEST_HEADER include/spdk/nvmf.h 00:02:40.698 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.698 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.698 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.698 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.698 TEST_HEADER include/spdk/opal_spec.h 00:02:40.698 TEST_HEADER include/spdk/opal.h 00:02:40.698 TEST_HEADER include/spdk/pci_ids.h 00:02:40.698 TEST_HEADER include/spdk/pipe.h 00:02:40.698 TEST_HEADER include/spdk/queue.h 00:02:40.698 TEST_HEADER include/spdk/reduce.h 00:02:40.698 TEST_HEADER include/spdk/rpc.h 00:02:40.698 TEST_HEADER include/spdk/scheduler.h 00:02:40.698 TEST_HEADER include/spdk/scsi.h 00:02:40.698 TEST_HEADER include/spdk/scsi_spec.h 00:02:40.698 TEST_HEADER include/spdk/sock.h 00:02:40.698 TEST_HEADER include/spdk/stdinc.h 00:02:40.698 TEST_HEADER include/spdk/string.h 00:02:40.698 TEST_HEADER include/spdk/trace.h 00:02:40.698 TEST_HEADER include/spdk/thread.h 00:02:40.698 TEST_HEADER include/spdk/tree.h 00:02:40.698 TEST_HEADER include/spdk/trace_parser.h 00:02:40.698 TEST_HEADER include/spdk/ublk.h 00:02:40.698 TEST_HEADER include/spdk/util.h 00:02:40.698 TEST_HEADER include/spdk/uuid.h 00:02:40.698 TEST_HEADER include/spdk/version.h 00:02:40.699 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:40.699 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:40.699 TEST_HEADER include/spdk/vhost.h 00:02:40.699 TEST_HEADER include/spdk/vmd.h 00:02:40.699 TEST_HEADER include/spdk/xor.h 00:02:40.699 TEST_HEADER include/spdk/zipf.h 00:02:40.699 CXX test/cpp_headers/accel.o 00:02:40.699 CXX 
test/cpp_headers/accel_module.o 00:02:40.699 CXX test/cpp_headers/assert.o 00:02:40.699 CXX test/cpp_headers/barrier.o 00:02:40.699 CXX test/cpp_headers/base64.o 00:02:40.699 CXX test/cpp_headers/bdev_module.o 00:02:40.699 CXX test/cpp_headers/bdev_zone.o 00:02:40.699 CXX test/cpp_headers/bdev.o 00:02:40.699 CXX test/cpp_headers/bit_array.o 00:02:40.699 CXX test/cpp_headers/bit_pool.o 00:02:40.699 CXX test/cpp_headers/blob_bdev.o 00:02:40.699 CXX test/cpp_headers/blobfs_bdev.o 00:02:40.699 CXX test/cpp_headers/blobfs.o 00:02:40.699 CXX test/cpp_headers/blob.o 00:02:40.699 CXX test/cpp_headers/conf.o 00:02:40.699 CXX test/cpp_headers/config.o 00:02:40.699 CXX test/cpp_headers/cpuset.o 00:02:40.699 CXX test/cpp_headers/crc16.o 00:02:40.699 CXX test/cpp_headers/crc32.o 00:02:40.699 CXX test/cpp_headers/crc64.o 00:02:40.699 CXX test/cpp_headers/dif.o 00:02:40.699 CXX test/cpp_headers/endian.o 00:02:40.699 CXX test/cpp_headers/env_dpdk.o 00:02:40.699 CXX test/cpp_headers/dma.o 00:02:40.699 CXX test/cpp_headers/env.o 00:02:40.699 CXX test/cpp_headers/event.o 00:02:40.699 CXX test/cpp_headers/fd_group.o 00:02:40.699 CXX test/cpp_headers/fd.o 00:02:40.699 CXX test/cpp_headers/file.o 00:02:40.699 CXX test/cpp_headers/fsdev.o 00:02:40.699 CXX test/cpp_headers/fsdev_module.o 00:02:40.699 CXX test/cpp_headers/ftl.o 00:02:40.699 CXX test/cpp_headers/fuse_dispatcher.o 00:02:40.699 CXX test/cpp_headers/gpt_spec.o 00:02:40.699 CXX test/cpp_headers/histogram_data.o 00:02:40.699 CXX test/cpp_headers/hexlify.o 00:02:40.699 CXX test/cpp_headers/idxd.o 00:02:40.699 CXX test/cpp_headers/idxd_spec.o 00:02:40.699 CXX test/cpp_headers/init.o 00:02:40.699 CXX test/cpp_headers/ioat_spec.o 00:02:40.699 CXX test/cpp_headers/ioat.o 00:02:40.699 CXX test/cpp_headers/iscsi_spec.o 00:02:40.699 CXX test/cpp_headers/json.o 00:02:40.699 CXX test/cpp_headers/jsonrpc.o 00:02:40.699 CXX test/cpp_headers/log.o 00:02:40.699 CXX test/cpp_headers/md5.o 00:02:40.699 CXX test/cpp_headers/likely.o 00:02:40.699 CXX test/cpp_headers/keyring_module.o 00:02:40.699 CXX test/cpp_headers/keyring.o 00:02:40.699 CXX test/cpp_headers/mmio.o 00:02:40.699 CXX test/cpp_headers/lvol.o 00:02:40.699 CXX test/cpp_headers/memory.o 00:02:40.699 CXX test/cpp_headers/nbd.o 00:02:40.699 CXX test/cpp_headers/notify.o 00:02:40.699 CXX test/cpp_headers/net.o 00:02:40.699 CC examples/util/zipf/zipf.o 00:02:40.699 CXX test/cpp_headers/nvme_intel.o 00:02:40.699 CXX test/cpp_headers/nvme.o 00:02:40.699 CXX test/cpp_headers/nvme_ocssd.o 00:02:40.699 CXX test/cpp_headers/nvme_spec.o 00:02:40.699 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:40.699 CXX test/cpp_headers/nvme_zns.o 00:02:40.699 CXX test/cpp_headers/nvmf_spec.o 00:02:40.699 CXX test/cpp_headers/nvmf_cmd.o 00:02:40.699 CXX test/cpp_headers/nvmf_transport.o 00:02:40.699 CXX test/cpp_headers/nvmf.o 00:02:40.699 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:40.699 CXX test/cpp_headers/opal_spec.o 00:02:40.699 CXX test/cpp_headers/opal.o 00:02:40.699 CXX test/cpp_headers/pipe.o 00:02:40.699 CXX test/cpp_headers/pci_ids.o 00:02:40.699 CXX test/cpp_headers/queue.o 00:02:40.699 LINK spdk_lspci 00:02:40.699 CXX test/cpp_headers/rpc.o 00:02:40.699 CC examples/ioat/perf/perf.o 00:02:40.699 CXX test/cpp_headers/reduce.o 00:02:40.699 CXX test/cpp_headers/scheduler.o 00:02:40.699 CXX test/cpp_headers/scsi.o 00:02:40.699 CC test/dma/test_dma/test_dma.o 00:02:40.699 CC test/env/vtophys/vtophys.o 00:02:40.699 CXX test/cpp_headers/stdinc.o 00:02:40.699 CXX test/cpp_headers/thread.o 00:02:40.699 CXX 
test/cpp_headers/scsi_spec.o 00:02:40.699 CXX test/cpp_headers/sock.o 00:02:40.699 CC test/env/memory/memory_ut.o 00:02:40.699 CXX test/cpp_headers/string.o 00:02:40.699 CC test/thread/poller_perf/poller_perf.o 00:02:40.699 CXX test/cpp_headers/trace_parser.o 00:02:40.699 CC examples/ioat/verify/verify.o 00:02:40.699 CXX test/cpp_headers/trace.o 00:02:40.699 CXX test/cpp_headers/tree.o 00:02:40.699 CXX test/cpp_headers/uuid.o 00:02:40.699 CXX test/cpp_headers/ublk.o 00:02:40.699 CXX test/cpp_headers/util.o 00:02:40.699 CXX test/cpp_headers/vfio_user_pci.o 00:02:40.699 CXX test/cpp_headers/version.o 00:02:40.699 CXX test/cpp_headers/vfio_user_spec.o 00:02:40.699 CXX test/cpp_headers/vmd.o 00:02:40.699 CXX test/cpp_headers/vhost.o 00:02:40.699 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:40.699 CXX test/cpp_headers/zipf.o 00:02:40.699 CC test/app/stub/stub.o 00:02:40.699 CXX test/cpp_headers/xor.o 00:02:40.699 CC test/app/jsoncat/jsoncat.o 00:02:40.699 CC app/fio/nvme/fio_plugin.o 00:02:40.699 CC test/app/histogram_perf/histogram_perf.o 00:02:40.699 CC test/env/pci/pci_ut.o 00:02:40.699 LINK rpc_client_test 00:02:40.699 CC test/app/bdev_svc/bdev_svc.o 00:02:40.959 CC app/fio/bdev/fio_plugin.o 00:02:40.959 LINK spdk_nvme_discover 00:02:40.959 LINK interrupt_tgt 00:02:40.959 LINK spdk_trace_record 00:02:40.959 LINK spdk_tgt 00:02:40.959 LINK nvmf_tgt 00:02:41.218 LINK iscsi_tgt 00:02:41.218 CC test/env/mem_callbacks/mem_callbacks.o 00:02:41.218 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:41.218 LINK spdk_dd 00:02:41.218 LINK zipf 00:02:41.218 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:41.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:41.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:41.218 LINK spdk_trace 00:02:41.218 LINK poller_perf 00:02:41.478 LINK vtophys 00:02:41.478 LINK jsoncat 00:02:41.478 LINK histogram_perf 00:02:41.478 LINK stub 00:02:41.478 LINK env_dpdk_post_init 00:02:41.478 LINK verify 00:02:41.478 LINK bdev_svc 00:02:41.478 LINK ioat_perf 00:02:41.478 LINK test_dma 00:02:41.739 LINK spdk_nvme_perf 00:02:41.739 CC app/vhost/vhost.o 00:02:41.739 LINK spdk_nvme_identify 00:02:41.739 LINK spdk_top 00:02:41.739 LINK nvme_fuzz 00:02:41.739 LINK vhost_fuzz 00:02:41.739 LINK spdk_bdev 00:02:41.739 CC examples/sock/hello_world/hello_sock.o 00:02:41.739 LINK pci_ut 00:02:41.739 CC examples/vmd/led/led.o 00:02:41.739 CC examples/vmd/lsvmd/lsvmd.o 00:02:41.739 CC examples/idxd/perf/perf.o 00:02:41.739 LINK spdk_nvme 00:02:41.739 CC examples/thread/thread/thread_ex.o 00:02:41.739 CC test/event/reactor_perf/reactor_perf.o 00:02:41.739 CC test/event/reactor/reactor.o 00:02:41.739 CC test/event/event_perf/event_perf.o 00:02:41.739 CC test/event/app_repeat/app_repeat.o 00:02:41.739 CC test/event/scheduler/scheduler.o 00:02:41.999 LINK led 00:02:41.999 LINK vhost 00:02:41.999 LINK lsvmd 00:02:41.999 LINK mem_callbacks 00:02:41.999 LINK reactor_perf 00:02:41.999 LINK reactor 00:02:41.999 LINK hello_sock 00:02:41.999 LINK event_perf 00:02:41.999 CC test/nvme/reset/reset.o 00:02:41.999 CC test/nvme/fused_ordering/fused_ordering.o 00:02:41.999 CC test/nvme/connect_stress/connect_stress.o 00:02:41.999 CC test/nvme/overhead/overhead.o 00:02:41.999 CC test/nvme/aer/aer.o 00:02:41.999 CC test/nvme/sgl/sgl.o 00:02:41.999 CC test/nvme/err_injection/err_injection.o 00:02:41.999 CC test/nvme/reserve/reserve.o 00:02:41.999 CC test/nvme/simple_copy/simple_copy.o 00:02:41.999 CC test/nvme/fdp/fdp.o 00:02:41.999 LINK app_repeat 00:02:41.999 CC 
test/nvme/compliance/nvme_compliance.o 00:02:41.999 CC test/nvme/cuse/cuse.o 00:02:41.999 CC test/nvme/startup/startup.o 00:02:41.999 LINK idxd_perf 00:02:41.999 CC test/nvme/boot_partition/boot_partition.o 00:02:41.999 CC test/nvme/e2edp/nvme_dp.o 00:02:41.999 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:41.999 LINK thread 00:02:41.999 LINK scheduler 00:02:41.999 CC test/blobfs/mkfs/mkfs.o 00:02:42.264 CC test/accel/dif/dif.o 00:02:42.264 CC test/lvol/esnap/esnap.o 00:02:42.264 LINK connect_stress 00:02:42.264 LINK startup 00:02:42.264 LINK boot_partition 00:02:42.264 LINK err_injection 00:02:42.264 LINK reserve 00:02:42.264 LINK fused_ordering 00:02:42.264 LINK doorbell_aers 00:02:42.264 LINK simple_copy 00:02:42.264 LINK reset 00:02:42.264 LINK mkfs 00:02:42.264 LINK sgl 00:02:42.264 LINK nvme_dp 00:02:42.264 LINK nvme_compliance 00:02:42.264 LINK aer 00:02:42.264 LINK memory_ut 00:02:42.264 LINK overhead 00:02:42.595 LINK fdp 00:02:42.595 CC examples/nvme/hello_world/hello_world.o 00:02:42.595 CC examples/nvme/arbitration/arbitration.o 00:02:42.595 CC examples/nvme/reconnect/reconnect.o 00:02:42.595 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:42.595 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:42.595 CC examples/nvme/hotplug/hotplug.o 00:02:42.595 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:42.595 CC examples/nvme/abort/abort.o 00:02:42.595 CC examples/accel/perf/accel_perf.o 00:02:42.595 CC examples/blob/cli/blobcli.o 00:02:42.595 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:42.595 CC examples/blob/hello_world/hello_blob.o 00:02:42.595 LINK cmb_copy 00:02:42.595 LINK pmr_persistence 00:02:42.595 LINK hello_world 00:02:42.886 LINK hotplug 00:02:42.886 LINK iscsi_fuzz 00:02:42.886 LINK dif 00:02:42.886 LINK arbitration 00:02:42.886 LINK reconnect 00:02:42.886 LINK abort 00:02:42.887 LINK nvme_manage 00:02:42.887 LINK hello_blob 00:02:42.887 LINK hello_fsdev 00:02:43.149 LINK accel_perf 00:02:43.149 LINK blobcli 00:02:43.149 LINK cuse 00:02:43.410 CC test/bdev/bdevio/bdevio.o 00:02:43.673 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.673 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.673 LINK bdevio 00:02:43.934 LINK hello_bdev 00:02:44.507 LINK bdevperf 00:02:45.079 CC examples/nvmf/nvmf/nvmf.o 00:02:45.341 LINK nvmf 00:02:46.726 LINK esnap 00:02:46.988 00:02:46.988 real 0m53.207s 00:02:46.988 user 7m37.479s 00:02:46.989 sys 4m13.001s 00:02:46.989 17:48:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:46.989 17:48:05 make -- common/autotest_common.sh@10 -- $ set +x 00:02:46.989 ************************************ 00:02:46.989 END TEST make 00:02:46.989 ************************************ 00:02:46.989 17:48:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:46.989 17:48:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:46.989 17:48:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:46.989 17:48:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.989 17:48:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:46.989 17:48:05 -- pm/common@44 -- $ pid=3234685 00:02:46.989 17:48:05 -- pm/common@50 -- $ kill -TERM 3234685 00:02:46.989 17:48:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.989 17:48:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:46.989 17:48:05 -- pm/common@44 -- $ pid=3234686 00:02:46.989 17:48:05 -- 
pm/common@50 -- $ kill -TERM 3234686 00:02:46.989 17:48:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.989 17:48:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:46.989 17:48:05 -- pm/common@44 -- $ pid=3234687 00:02:46.989 17:48:05 -- pm/common@50 -- $ kill -TERM 3234687 00:02:46.989 17:48:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.989 17:48:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:46.989 17:48:05 -- pm/common@44 -- $ pid=3234712 00:02:46.989 17:48:05 -- pm/common@50 -- $ sudo -E kill -TERM 3234712 00:02:46.989 17:48:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:46.989 17:48:05 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:46.989 17:48:05 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:46.989 17:48:05 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:46.989 17:48:05 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:47.251 17:48:05 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:47.251 17:48:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:47.251 17:48:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:47.251 17:48:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:47.252 17:48:05 -- scripts/common.sh@336 -- # IFS=.-: 00:02:47.252 17:48:05 -- scripts/common.sh@336 -- # read -ra ver1 00:02:47.252 17:48:05 -- scripts/common.sh@337 -- # IFS=.-: 00:02:47.252 17:48:05 -- scripts/common.sh@337 -- # read -ra ver2 00:02:47.252 17:48:05 -- scripts/common.sh@338 -- # local 'op=<' 00:02:47.252 17:48:05 -- scripts/common.sh@340 -- # ver1_l=2 00:02:47.252 17:48:05 -- scripts/common.sh@341 -- # ver2_l=1 00:02:47.252 17:48:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:47.252 17:48:05 -- scripts/common.sh@344 -- # case "$op" in 00:02:47.252 17:48:05 -- scripts/common.sh@345 -- # : 1 00:02:47.252 17:48:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:47.252 17:48:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:47.252 17:48:05 -- scripts/common.sh@365 -- # decimal 1 00:02:47.252 17:48:05 -- scripts/common.sh@353 -- # local d=1 00:02:47.252 17:48:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:47.252 17:48:05 -- scripts/common.sh@355 -- # echo 1 00:02:47.252 17:48:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:47.252 17:48:05 -- scripts/common.sh@366 -- # decimal 2 00:02:47.252 17:48:05 -- scripts/common.sh@353 -- # local d=2 00:02:47.252 17:48:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:47.252 17:48:05 -- scripts/common.sh@355 -- # echo 2 00:02:47.252 17:48:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:47.252 17:48:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:47.252 17:48:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:47.252 17:48:05 -- scripts/common.sh@368 -- # return 0 00:02:47.252 17:48:05 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:47.252 17:48:05 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:47.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.252 --rc genhtml_branch_coverage=1 00:02:47.252 --rc genhtml_function_coverage=1 00:02:47.252 --rc genhtml_legend=1 00:02:47.252 --rc geninfo_all_blocks=1 00:02:47.252 --rc geninfo_unexecuted_blocks=1 00:02:47.252 00:02:47.252 ' 00:02:47.252 17:48:05 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:47.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.252 --rc genhtml_branch_coverage=1 00:02:47.252 --rc genhtml_function_coverage=1 00:02:47.252 --rc genhtml_legend=1 00:02:47.252 --rc geninfo_all_blocks=1 00:02:47.252 --rc geninfo_unexecuted_blocks=1 00:02:47.252 00:02:47.252 ' 00:02:47.252 17:48:05 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:47.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.252 --rc genhtml_branch_coverage=1 00:02:47.252 --rc genhtml_function_coverage=1 00:02:47.252 --rc genhtml_legend=1 00:02:47.252 --rc geninfo_all_blocks=1 00:02:47.252 --rc geninfo_unexecuted_blocks=1 00:02:47.252 00:02:47.252 ' 00:02:47.252 17:48:05 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:47.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.252 --rc genhtml_branch_coverage=1 00:02:47.252 --rc genhtml_function_coverage=1 00:02:47.252 --rc genhtml_legend=1 00:02:47.252 --rc geninfo_all_blocks=1 00:02:47.252 --rc geninfo_unexecuted_blocks=1 00:02:47.252 00:02:47.252 ' 00:02:47.252 17:48:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:47.252 17:48:05 -- nvmf/common.sh@7 -- # uname -s 00:02:47.252 17:48:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.252 17:48:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.252 17:48:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.252 17:48:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.252 17:48:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.252 17:48:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.252 17:48:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.252 17:48:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.252 17:48:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.252 17:48:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.252 17:48:05 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:47.252 17:48:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:47.252 17:48:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.252 17:48:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.252 17:48:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:47.252 17:48:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.252 17:48:05 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:47.252 17:48:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:47.252 17:48:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.252 17:48:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.252 17:48:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.252 17:48:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.252 17:48:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.252 17:48:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.252 17:48:05 -- paths/export.sh@5 -- # export PATH 00:02:47.252 17:48:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.252 17:48:05 -- nvmf/common.sh@51 -- # : 0 00:02:47.252 17:48:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:47.252 17:48:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:47.252 17:48:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.252 17:48:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.252 17:48:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.252 17:48:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:47.252 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:47.252 17:48:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:47.252 17:48:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:47.252 17:48:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:47.252 17:48:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.252 17:48:05 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.252 17:48:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.252 17:48:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.252 17:48:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:47.252 
17:48:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.252 17:48:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:47.252 17:48:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.252 17:48:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.252 17:48:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.252 17:48:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.252 17:48:05 -- spdk/autotest.sh@48 -- # udevadm_pid=3299157 00:02:47.252 17:48:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.252 17:48:05 -- pm/common@17 -- # local monitor 00:02:47.252 17:48:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.252 17:48:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.252 17:48:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.252 17:48:05 -- pm/common@21 -- # date +%s 00:02:47.252 17:48:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.252 17:48:05 -- pm/common@25 -- # sleep 1 00:02:47.252 17:48:05 -- pm/common@21 -- # date +%s 00:02:47.252 17:48:05 -- pm/common@21 -- # date +%s 00:02:47.252 17:48:05 -- pm/common@21 -- # date +%s 00:02:47.252 17:48:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732121285 00:02:47.252 17:48:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732121285 00:02:47.252 17:48:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732121285 00:02:47.252 17:48:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732121285 00:02:47.252 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732121285_collect-cpu-load.pm.log 00:02:47.252 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732121285_collect-vmstat.pm.log 00:02:47.252 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732121285_collect-cpu-temp.pm.log 00:02:47.252 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732121285_collect-bmc-pm.bmc.pm.log 00:02:48.196 17:48:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:48.196 17:48:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:48.196 17:48:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:48.196 17:48:06 -- common/autotest_common.sh@10 -- # set +x 00:02:48.196 17:48:06 -- spdk/autotest.sh@59 -- # create_test_list 00:02:48.196 17:48:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:48.196 17:48:06 -- common/autotest_common.sh@10 -- # set +x 00:02:48.196 17:48:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:48.196 17:48:06 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:48.196 17:48:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:48.196 17:48:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:48.196 17:48:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:48.196 17:48:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.196 17:48:06 -- common/autotest_common.sh@1457 -- # uname 00:02:48.196 17:48:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:48.196 17:48:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.196 17:48:06 -- common/autotest_common.sh@1477 -- # uname 00:02:48.196 17:48:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:48.196 17:48:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:48.196 17:48:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:48.458 lcov: LCOV version 1.15 00:02:48.458 17:48:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:03.378 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:03.378 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:18.292 17:48:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:18.292 17:48:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:18.293 17:48:35 -- common/autotest_common.sh@10 -- # set +x 00:03:18.293 17:48:35 -- spdk/autotest.sh@78 -- # rm -f 00:03:18.293 17:48:35 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.840 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:20.840 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:20.840 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:21.101 17:48:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:21.101 17:48:39 -- 
common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:21.101 17:48:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:21.101 17:48:39 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:21.101 17:48:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:21.101 17:48:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:21.101 17:48:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:21.101 17:48:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.101 17:48:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.101 17:48:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:21.101 17:48:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:21.101 17:48:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:21.101 17:48:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:21.101 17:48:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:21.101 17:48:39 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:21.101 No valid GPT data, bailing 00:03:21.101 17:48:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:21.101 17:48:39 -- scripts/common.sh@394 -- # pt= 00:03:21.101 17:48:39 -- scripts/common.sh@395 -- # return 1 00:03:21.101 17:48:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:21.101 1+0 records in 00:03:21.101 1+0 records out 00:03:21.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446875 s, 235 MB/s 00:03:21.101 17:48:39 -- spdk/autotest.sh@105 -- # sync 00:03:21.101 17:48:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:21.101 17:48:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:21.101 17:48:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:29.250 17:48:47 -- spdk/autotest.sh@111 -- # uname -s 00:03:29.250 17:48:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:29.250 17:48:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:29.250 17:48:47 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:33.456 Hugepages 00:03:33.456 node hugesize free / total 00:03:33.456 node0 1048576kB 0 / 0 00:03:33.456 node0 2048kB 0 / 0 00:03:33.456 node1 1048576kB 0 / 0 00:03:33.456 node1 2048kB 0 / 0 00:03:33.456 00:03:33.456 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:33.456 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:33.456 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:33.456 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:33.456 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:33.456 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:33.456 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:33.456 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:33.456 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:33.456 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:33.456 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:33.456 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:33.456 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:33.456 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:33.456 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:33.456 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:33.457 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:33.457 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:33.457 17:48:51 -- spdk/autotest.sh@117 -- # uname 
-s 00:03:33.457 17:48:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:33.457 17:48:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:33.457 17:48:51 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:36.759 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:36.759 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:36.759 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:36.759 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:36.759 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:36.759 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:37.019 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.933 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:39.197 17:48:57 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:40.138 17:48:58 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:40.138 17:48:58 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:40.138 17:48:58 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:40.138 17:48:58 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:40.138 17:48:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:40.138 17:48:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:40.139 17:48:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.139 17:48:58 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:40.139 17:48:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:40.139 17:48:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:40.139 17:48:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:40.139 17:48:58 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.344 Waiting for block devices as requested 00:03:44.345 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:44.345 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:44.345 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:44.345 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:44.345 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:44.345 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:44.345 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:44.606 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:44.606 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:44.867 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:44.867 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:44.867 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:44.867 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:45.128 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:45.128 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:45.128 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:45.128 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:45.702 17:49:03 -- common/autotest_common.sh@1524 -- # for bdf in 
"${bdfs[@]}" 00:03:45.702 17:49:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:45.702 17:49:03 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:45.702 17:49:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:45.702 17:49:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:45.702 17:49:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:45.702 17:49:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:45.702 17:49:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:45.702 17:49:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:45.702 17:49:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:45.702 17:49:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:45.702 17:49:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:45.702 17:49:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:45.702 17:49:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:45.702 17:49:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:45.702 17:49:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:45.702 17:49:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:45.702 17:49:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:45.702 17:49:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:45.702 17:49:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:45.703 17:49:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:45.703 17:49:03 -- common/autotest_common.sh@1543 -- # continue 00:03:45.703 17:49:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:45.703 17:49:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:45.703 17:49:03 -- common/autotest_common.sh@10 -- # set +x 00:03:45.703 17:49:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:45.703 17:49:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.703 17:49:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.703 17:49:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:49.911 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:49.911 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:49.911 17:49:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:49.911 17:49:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.911 
17:49:08 -- common/autotest_common.sh@10 -- # set +x 00:03:49.911 17:49:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:49.911 17:49:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:49.911 17:49:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:49.911 17:49:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:49.911 17:49:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:49.911 17:49:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:49.911 17:49:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:49.911 17:49:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:49.911 17:49:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:49.911 17:49:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:49.911 17:49:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:49.911 17:49:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:49.911 17:49:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:50.172 17:49:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:50.172 17:49:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:50.172 17:49:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.172 17:49:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:50.172 17:49:08 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:50.172 17:49:08 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:50.172 17:49:08 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:50.172 17:49:08 -- common/autotest_common.sh@1572 -- # return 0 00:03:50.172 17:49:08 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:50.172 17:49:08 -- common/autotest_common.sh@1580 -- # return 0 00:03:50.172 17:49:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:50.172 17:49:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:50.172 17:49:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.172 17:49:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.172 17:49:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:50.172 17:49:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.172 17:49:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.172 17:49:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:50.172 17:49:08 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:50.172 17:49:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.172 17:49:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.172 17:49:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.172 ************************************ 00:03:50.172 START TEST env 00:03:50.172 ************************************ 00:03:50.172 17:49:08 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:50.172 * Looking for test storage... 
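Note: opal_revert_cleanup above ends up doing nothing because get_nvme_bdfs_by_id filters on PCI device ID 0x0a54 and this node's drive reports 0xa80a. A condensed sketch of that filter as traced (the SPDK root path is this workspace's; adjust for another checkout):

    bdfs=()
    for bdf in $(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"             # empty on this box: 0xa80a != 0x0a54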
00:03:50.172 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:03:50.172 17:49:08 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.172 17:49:08 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.172 17:49:08 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.433 17:49:08 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.433 17:49:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.433 17:49:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.433 17:49:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.433 17:49:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.433 17:49:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.433 17:49:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.433 17:49:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.433 17:49:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.433 17:49:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.433 17:49:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.433 17:49:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.433 17:49:08 env -- scripts/common.sh@344 -- # case "$op" in 00:03:50.433 17:49:08 env -- scripts/common.sh@345 -- # : 1 00:03:50.433 17:49:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.433 17:49:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.433 17:49:08 env -- scripts/common.sh@365 -- # decimal 1 00:03:50.433 17:49:08 env -- scripts/common.sh@353 -- # local d=1 00:03:50.433 17:49:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.433 17:49:08 env -- scripts/common.sh@355 -- # echo 1 00:03:50.433 17:49:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.433 17:49:08 env -- scripts/common.sh@366 -- # decimal 2 00:03:50.433 17:49:08 env -- scripts/common.sh@353 -- # local d=2 00:03:50.433 17:49:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.433 17:49:08 env -- scripts/common.sh@355 -- # echo 2 00:03:50.433 17:49:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.433 17:49:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.433 17:49:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.433 17:49:08 env -- scripts/common.sh@368 -- # return 0 00:03:50.433 17:49:08 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.433 17:49:08 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.433 --rc genhtml_branch_coverage=1 00:03:50.433 --rc genhtml_function_coverage=1 00:03:50.433 --rc genhtml_legend=1 00:03:50.433 --rc geninfo_all_blocks=1 00:03:50.433 --rc geninfo_unexecuted_blocks=1 00:03:50.433 00:03:50.433 ' 00:03:50.433 17:49:08 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.433 --rc genhtml_branch_coverage=1 00:03:50.433 --rc genhtml_function_coverage=1 00:03:50.433 --rc genhtml_legend=1 00:03:50.433 --rc geninfo_all_blocks=1 00:03:50.433 --rc geninfo_unexecuted_blocks=1 00:03:50.433 00:03:50.433 ' 00:03:50.433 17:49:08 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.433 --rc genhtml_branch_coverage=1 00:03:50.433 --rc genhtml_function_coverage=1 00:03:50.433 
--rc genhtml_legend=1 00:03:50.433 --rc geninfo_all_blocks=1 00:03:50.433 --rc geninfo_unexecuted_blocks=1 00:03:50.433 00:03:50.433 ' 00:03:50.433 17:49:08 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.433 --rc genhtml_branch_coverage=1 00:03:50.433 --rc genhtml_function_coverage=1 00:03:50.433 --rc genhtml_legend=1 00:03:50.433 --rc geninfo_all_blocks=1 00:03:50.433 --rc geninfo_unexecuted_blocks=1 00:03:50.433 00:03:50.433 ' 00:03:50.433 17:49:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:50.433 17:49:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.433 17:49:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.433 17:49:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.433 ************************************ 00:03:50.433 START TEST env_memory 00:03:50.433 ************************************ 00:03:50.433 17:49:08 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:50.433 00:03:50.433 00:03:50.433 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.433 http://cunit.sourceforge.net/ 00:03:50.433 00:03:50.433 00:03:50.433 Suite: memory 00:03:50.433 Test: alloc and free memory map ...[2024-11-20 17:49:08.849046] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:50.434 passed 00:03:50.434 Test: mem map translation ...[2024-11-20 17:49:08.874668] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:50.434 [2024-11-20 17:49:08.874698] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:50.434 [2024-11-20 17:49:08.874744] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:50.434 [2024-11-20 17:49:08.874752] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:50.434 passed 00:03:50.434 Test: mem map registration ...[2024-11-20 17:49:08.930083] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:50.434 [2024-11-20 17:49:08.930119] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:50.434 passed 00:03:50.696 Test: mem map adjacent registrations ...passed 00:03:50.696 00:03:50.696 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.696 suites 1 1 n/a 0 0 00:03:50.696 tests 4 4 4 0 0 00:03:50.696 asserts 152 152 152 0 n/a 00:03:50.696 00:03:50.696 Elapsed time = 0.201 seconds 00:03:50.696 00:03:50.696 real 0m0.216s 00:03:50.696 user 0m0.203s 00:03:50.696 sys 0m0.012s 00:03:50.696 17:49:09 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.696 17:49:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:50.696 ************************************ 
00:03:50.696 END TEST env_memory 00:03:50.696 ************************************ 00:03:50.696 17:49:09 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:50.696 17:49:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.696 17:49:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.696 17:49:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.696 ************************************ 00:03:50.696 START TEST env_vtophys 00:03:50.696 ************************************ 00:03:50.696 17:49:09 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:50.696 EAL: lib.eal log level changed from notice to debug 00:03:50.696 EAL: Detected lcore 0 as core 0 on socket 0 00:03:50.696 EAL: Detected lcore 1 as core 1 on socket 0 00:03:50.696 EAL: Detected lcore 2 as core 2 on socket 0 00:03:50.696 EAL: Detected lcore 3 as core 3 on socket 0 00:03:50.696 EAL: Detected lcore 4 as core 4 on socket 0 00:03:50.696 EAL: Detected lcore 5 as core 5 on socket 0 00:03:50.696 EAL: Detected lcore 6 as core 6 on socket 0 00:03:50.696 EAL: Detected lcore 7 as core 7 on socket 0 00:03:50.696 EAL: Detected lcore 8 as core 8 on socket 0 00:03:50.696 EAL: Detected lcore 9 as core 9 on socket 0 00:03:50.696 EAL: Detected lcore 10 as core 10 on socket 0 00:03:50.696 EAL: Detected lcore 11 as core 11 on socket 0 00:03:50.696 EAL: Detected lcore 12 as core 12 on socket 0 00:03:50.696 EAL: Detected lcore 13 as core 13 on socket 0 00:03:50.696 EAL: Detected lcore 14 as core 14 on socket 0 00:03:50.696 EAL: Detected lcore 15 as core 15 on socket 0 00:03:50.696 EAL: Detected lcore 16 as core 16 on socket 0 00:03:50.696 EAL: Detected lcore 17 as core 17 on socket 0 00:03:50.696 EAL: Detected lcore 18 as core 18 on socket 0 00:03:50.696 EAL: Detected lcore 19 as core 19 on socket 0 00:03:50.696 EAL: Detected lcore 20 as core 20 on socket 0 00:03:50.696 EAL: Detected lcore 21 as core 21 on socket 0 00:03:50.696 EAL: Detected lcore 22 as core 22 on socket 0 00:03:50.696 EAL: Detected lcore 23 as core 23 on socket 0 00:03:50.696 EAL: Detected lcore 24 as core 24 on socket 0 00:03:50.696 EAL: Detected lcore 25 as core 25 on socket 0 00:03:50.696 EAL: Detected lcore 26 as core 26 on socket 0 00:03:50.696 EAL: Detected lcore 27 as core 27 on socket 0 00:03:50.696 EAL: Detected lcore 28 as core 28 on socket 0 00:03:50.696 EAL: Detected lcore 29 as core 29 on socket 0 00:03:50.696 EAL: Detected lcore 30 as core 30 on socket 0 00:03:50.696 EAL: Detected lcore 31 as core 31 on socket 0 00:03:50.696 EAL: Detected lcore 32 as core 32 on socket 0 00:03:50.696 EAL: Detected lcore 33 as core 33 on socket 0 00:03:50.696 EAL: Detected lcore 34 as core 34 on socket 0 00:03:50.696 EAL: Detected lcore 35 as core 35 on socket 0 00:03:50.696 EAL: Detected lcore 36 as core 0 on socket 1 00:03:50.696 EAL: Detected lcore 37 as core 1 on socket 1 00:03:50.696 EAL: Detected lcore 38 as core 2 on socket 1 00:03:50.696 EAL: Detected lcore 39 as core 3 on socket 1 00:03:50.696 EAL: Detected lcore 40 as core 4 on socket 1 00:03:50.696 EAL: Detected lcore 41 as core 5 on socket 1 00:03:50.696 EAL: Detected lcore 42 as core 6 on socket 1 00:03:50.696 EAL: Detected lcore 43 as core 7 on socket 1 00:03:50.696 EAL: Detected lcore 44 as core 8 on socket 1 00:03:50.696 EAL: Detected lcore 45 as core 9 on socket 1 00:03:50.696 EAL: Detected lcore 46 as core 10 on socket 1 
00:03:50.696 EAL: Detected lcore 47 as core 11 on socket 1 00:03:50.696 EAL: Detected lcore 48 as core 12 on socket 1 00:03:50.696 EAL: Detected lcore 49 as core 13 on socket 1 00:03:50.696 EAL: Detected lcore 50 as core 14 on socket 1 00:03:50.696 EAL: Detected lcore 51 as core 15 on socket 1 00:03:50.696 EAL: Detected lcore 52 as core 16 on socket 1 00:03:50.696 EAL: Detected lcore 53 as core 17 on socket 1 00:03:50.696 EAL: Detected lcore 54 as core 18 on socket 1 00:03:50.696 EAL: Detected lcore 55 as core 19 on socket 1 00:03:50.696 EAL: Detected lcore 56 as core 20 on socket 1 00:03:50.696 EAL: Detected lcore 57 as core 21 on socket 1 00:03:50.696 EAL: Detected lcore 58 as core 22 on socket 1 00:03:50.696 EAL: Detected lcore 59 as core 23 on socket 1 00:03:50.696 EAL: Detected lcore 60 as core 24 on socket 1 00:03:50.696 EAL: Detected lcore 61 as core 25 on socket 1 00:03:50.696 EAL: Detected lcore 62 as core 26 on socket 1 00:03:50.696 EAL: Detected lcore 63 as core 27 on socket 1 00:03:50.696 EAL: Detected lcore 64 as core 28 on socket 1 00:03:50.696 EAL: Detected lcore 65 as core 29 on socket 1 00:03:50.696 EAL: Detected lcore 66 as core 30 on socket 1 00:03:50.696 EAL: Detected lcore 67 as core 31 on socket 1 00:03:50.696 EAL: Detected lcore 68 as core 32 on socket 1 00:03:50.696 EAL: Detected lcore 69 as core 33 on socket 1 00:03:50.696 EAL: Detected lcore 70 as core 34 on socket 1 00:03:50.696 EAL: Detected lcore 71 as core 35 on socket 1 00:03:50.696 EAL: Detected lcore 72 as core 0 on socket 0 00:03:50.696 EAL: Detected lcore 73 as core 1 on socket 0 00:03:50.696 EAL: Detected lcore 74 as core 2 on socket 0 00:03:50.696 EAL: Detected lcore 75 as core 3 on socket 0 00:03:50.696 EAL: Detected lcore 76 as core 4 on socket 0 00:03:50.696 EAL: Detected lcore 77 as core 5 on socket 0 00:03:50.696 EAL: Detected lcore 78 as core 6 on socket 0 00:03:50.696 EAL: Detected lcore 79 as core 7 on socket 0 00:03:50.696 EAL: Detected lcore 80 as core 8 on socket 0 00:03:50.696 EAL: Detected lcore 81 as core 9 on socket 0 00:03:50.696 EAL: Detected lcore 82 as core 10 on socket 0 00:03:50.696 EAL: Detected lcore 83 as core 11 on socket 0 00:03:50.696 EAL: Detected lcore 84 as core 12 on socket 0 00:03:50.696 EAL: Detected lcore 85 as core 13 on socket 0 00:03:50.696 EAL: Detected lcore 86 as core 14 on socket 0 00:03:50.696 EAL: Detected lcore 87 as core 15 on socket 0 00:03:50.696 EAL: Detected lcore 88 as core 16 on socket 0 00:03:50.696 EAL: Detected lcore 89 as core 17 on socket 0 00:03:50.696 EAL: Detected lcore 90 as core 18 on socket 0 00:03:50.696 EAL: Detected lcore 91 as core 19 on socket 0 00:03:50.696 EAL: Detected lcore 92 as core 20 on socket 0 00:03:50.696 EAL: Detected lcore 93 as core 21 on socket 0 00:03:50.696 EAL: Detected lcore 94 as core 22 on socket 0 00:03:50.696 EAL: Detected lcore 95 as core 23 on socket 0 00:03:50.696 EAL: Detected lcore 96 as core 24 on socket 0 00:03:50.696 EAL: Detected lcore 97 as core 25 on socket 0 00:03:50.696 EAL: Detected lcore 98 as core 26 on socket 0 00:03:50.696 EAL: Detected lcore 99 as core 27 on socket 0 00:03:50.696 EAL: Detected lcore 100 as core 28 on socket 0 00:03:50.696 EAL: Detected lcore 101 as core 29 on socket 0 00:03:50.696 EAL: Detected lcore 102 as core 30 on socket 0 00:03:50.696 EAL: Detected lcore 103 as core 31 on socket 0 00:03:50.696 EAL: Detected lcore 104 as core 32 on socket 0 00:03:50.696 EAL: Detected lcore 105 as core 33 on socket 0 00:03:50.697 EAL: Detected lcore 106 as core 34 on socket 0 00:03:50.697 EAL: 
Detected lcore 107 as core 35 on socket 0 00:03:50.697 EAL: Detected lcore 108 as core 0 on socket 1 00:03:50.697 EAL: Detected lcore 109 as core 1 on socket 1 00:03:50.697 EAL: Detected lcore 110 as core 2 on socket 1 00:03:50.697 EAL: Detected lcore 111 as core 3 on socket 1 00:03:50.697 EAL: Detected lcore 112 as core 4 on socket 1 00:03:50.697 EAL: Detected lcore 113 as core 5 on socket 1 00:03:50.697 EAL: Detected lcore 114 as core 6 on socket 1 00:03:50.697 EAL: Detected lcore 115 as core 7 on socket 1 00:03:50.697 EAL: Detected lcore 116 as core 8 on socket 1 00:03:50.697 EAL: Detected lcore 117 as core 9 on socket 1 00:03:50.697 EAL: Detected lcore 118 as core 10 on socket 1 00:03:50.697 EAL: Detected lcore 119 as core 11 on socket 1 00:03:50.697 EAL: Detected lcore 120 as core 12 on socket 1 00:03:50.697 EAL: Detected lcore 121 as core 13 on socket 1 00:03:50.697 EAL: Detected lcore 122 as core 14 on socket 1 00:03:50.697 EAL: Detected lcore 123 as core 15 on socket 1 00:03:50.697 EAL: Detected lcore 124 as core 16 on socket 1 00:03:50.697 EAL: Detected lcore 125 as core 17 on socket 1 00:03:50.697 EAL: Detected lcore 126 as core 18 on socket 1 00:03:50.697 EAL: Detected lcore 127 as core 19 on socket 1 00:03:50.697 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:50.697 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:50.697 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:50.697 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:50.697 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:50.697 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:50.697 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:50.697 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:50.697 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:50.697 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:50.697 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:50.697 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:50.697 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:50.697 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:50.697 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:50.697 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:50.697 EAL: Maximum logical cores by configuration: 128 00:03:50.697 EAL: Detected CPU lcores: 128 00:03:50.697 EAL: Detected NUMA nodes: 2 00:03:50.697 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:50.697 EAL: Detected shared linkage of DPDK 00:03:50.697 EAL: No shared files mode enabled, IPC will be disabled 00:03:50.697 EAL: Bus pci wants IOVA as 'DC' 00:03:50.697 EAL: Buses did not request a specific IOVA mode. 00:03:50.697 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:50.697 EAL: Selected IOVA mode 'VA' 00:03:50.697 EAL: Probing VFIO support... 00:03:50.697 EAL: IOMMU type 1 (Type 1) is supported 00:03:50.697 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:50.697 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:50.697 EAL: VFIO support initialized 00:03:50.697 EAL: Ask a virtual area of 0x2e000 bytes 00:03:50.697 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:50.697 EAL: Setting up physically contiguous memory... 
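Note: the lcore map above (lcores 0-71 are the first threads of each physical core, 72-143 their hyperthread siblings, with 128-143 skipped once the configured maximum of 128 lcores is reached) is read from sysfs CPU topology. A sketch reproducing the same "lcore N as core C on socket S" view on a Linux box:

    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        n=${cpu##*cpu}
        echo "lcore $n as core $(cat "$cpu/topology/core_id") on socket $(cat "$cpu/topology/physical_package_id")"
    done | sort -k2 -n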
00:03:50.697 EAL: Setting maximum number of open files to 524288 00:03:50.697 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:50.697 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:50.697 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:50.697 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.697 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:50.697 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.697 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.697 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:50.697 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:50.697 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.697 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:50.697 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.697 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.697 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:50.697 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:50.697 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.697 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:50.697 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.697 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.697 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:50.697 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:50.697 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.697 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:50.697 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.697 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.697 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:50.697 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:50.697 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:50.697 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.697 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:50.697 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.697 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.697 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:50.697 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:50.697 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.697 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:50.697 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.697 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.697 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:50.697 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:50.697 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.697 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:50.697 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.697 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.697 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:50.697 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:50.697 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.697 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:50.697 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.697 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.697 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:50.697 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:50.697 EAL: Hugepages will be freed exactly as allocated. 00:03:50.697 EAL: No shared files mode enabled, IPC is disabled 00:03:50.697 EAL: No shared files mode enabled, IPC is disabled 00:03:50.697 EAL: TSC frequency is ~2400000 KHz 00:03:50.697 EAL: Main lcore 0 is ready (tid=7f64a30f8a00;cpuset=[0]) 00:03:50.697 EAL: Trying to obtain current memory policy. 00:03:50.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.697 EAL: Restoring previous memory policy: 0 00:03:50.697 EAL: request: mp_malloc_sync 00:03:50.697 EAL: No shared files mode enabled, IPC is disabled 00:03:50.697 EAL: Heap on socket 0 was expanded by 2MB 00:03:50.697 EAL: No shared files mode enabled, IPC is disabled 00:03:50.697 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:50.697 EAL: Mem event callback 'spdk:(nil)' registered 00:03:50.697 00:03:50.697 00:03:50.697 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.697 http://cunit.sourceforge.net/ 00:03:50.697 00:03:50.697 00:03:50.697 Suite: components_suite 00:03:50.697 Test: vtophys_malloc_test ...passed 00:03:50.697 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:50.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.697 EAL: Restoring previous memory policy: 4 00:03:50.697 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.697 EAL: request: mp_malloc_sync 00:03:50.697 EAL: No shared files mode enabled, IPC is disabled 00:03:50.697 EAL: Heap on socket 0 was expanded by 4MB 00:03:50.697 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.697 EAL: request: mp_malloc_sync 00:03:50.697 EAL: No shared files mode enabled, IPC is disabled 00:03:50.697 EAL: Heap on socket 0 was shrunk by 4MB 00:03:50.697 EAL: Trying to obtain current memory policy. 00:03:50.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.697 EAL: Restoring previous memory policy: 4 00:03:50.697 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.697 EAL: request: mp_malloc_sync 00:03:50.697 EAL: No shared files mode enabled, IPC is disabled 00:03:50.697 EAL: Heap on socket 0 was expanded by 6MB 00:03:50.697 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.697 EAL: request: mp_malloc_sync 00:03:50.697 EAL: No shared files mode enabled, IPC is disabled 00:03:50.697 EAL: Heap on socket 0 was shrunk by 6MB 00:03:50.697 EAL: Trying to obtain current memory policy. 00:03:50.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.698 EAL: Restoring previous memory policy: 4 00:03:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.698 EAL: request: mp_malloc_sync 00:03:50.698 EAL: No shared files mode enabled, IPC is disabled 00:03:50.698 EAL: Heap on socket 0 was expanded by 10MB 00:03:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.698 EAL: request: mp_malloc_sync 00:03:50.698 EAL: No shared files mode enabled, IPC is disabled 00:03:50.698 EAL: Heap on socket 0 was shrunk by 10MB 00:03:50.698 EAL: Trying to obtain current memory policy. 
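Note: each 0x400000000 reservation above is exactly one memseg list's capacity, n_segs 8192 x hugepage_sz 2097152 bytes, and with 4 lists per socket on 2 sockets EAL pre-reserves 128 GiB of virtual address space before a single hugepage is touched. A quick check:

    printf '0x%x\n' $(( 8192 * 2097152 ))                      # 0x400000000 = 16 GiB per list
    echo "$(( 8192 * 2097152 * 4 * 2 / 1024**3 )) GiB total"   # 128 GiB across 4 lists x 2 sockets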
00:03:50.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.698 EAL: Restoring previous memory policy: 4 00:03:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.698 EAL: request: mp_malloc_sync 00:03:50.698 EAL: No shared files mode enabled, IPC is disabled 00:03:50.698 EAL: Heap on socket 0 was expanded by 18MB 00:03:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.698 EAL: request: mp_malloc_sync 00:03:50.698 EAL: No shared files mode enabled, IPC is disabled 00:03:50.698 EAL: Heap on socket 0 was shrunk by 18MB 00:03:50.698 EAL: Trying to obtain current memory policy. 00:03:50.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.698 EAL: Restoring previous memory policy: 4 00:03:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.698 EAL: request: mp_malloc_sync 00:03:50.698 EAL: No shared files mode enabled, IPC is disabled 00:03:50.698 EAL: Heap on socket 0 was expanded by 34MB 00:03:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.698 EAL: request: mp_malloc_sync 00:03:50.698 EAL: No shared files mode enabled, IPC is disabled 00:03:50.698 EAL: Heap on socket 0 was shrunk by 34MB 00:03:50.698 EAL: Trying to obtain current memory policy. 00:03:50.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.698 EAL: Restoring previous memory policy: 4 00:03:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.698 EAL: request: mp_malloc_sync 00:03:50.698 EAL: No shared files mode enabled, IPC is disabled 00:03:50.698 EAL: Heap on socket 0 was expanded by 66MB 00:03:50.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.958 EAL: request: mp_malloc_sync 00:03:50.958 EAL: No shared files mode enabled, IPC is disabled 00:03:50.958 EAL: Heap on socket 0 was shrunk by 66MB 00:03:50.958 EAL: Trying to obtain current memory policy. 00:03:50.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.958 EAL: Restoring previous memory policy: 4 00:03:50.958 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.958 EAL: request: mp_malloc_sync 00:03:50.958 EAL: No shared files mode enabled, IPC is disabled 00:03:50.958 EAL: Heap on socket 0 was expanded by 130MB 00:03:50.958 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.958 EAL: request: mp_malloc_sync 00:03:50.958 EAL: No shared files mode enabled, IPC is disabled 00:03:50.958 EAL: Heap on socket 0 was shrunk by 130MB 00:03:50.958 EAL: Trying to obtain current memory policy. 00:03:50.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.958 EAL: Restoring previous memory policy: 4 00:03:50.958 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.958 EAL: request: mp_malloc_sync 00:03:50.958 EAL: No shared files mode enabled, IPC is disabled 00:03:50.958 EAL: Heap on socket 0 was expanded by 258MB 00:03:50.958 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.959 EAL: request: mp_malloc_sync 00:03:50.959 EAL: No shared files mode enabled, IPC is disabled 00:03:50.959 EAL: Heap on socket 0 was shrunk by 258MB 00:03:50.959 EAL: Trying to obtain current memory policy. 
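Note: the expansion sizes in this suite (4, 6, 10, 18, 34, 66, 130, 258 MB above, then 514 and 1026 MB below) all fit one pattern: the test doubles its request each step, and the heap grows by that power of two plus one extra 2 MB hugepage, plausibly malloc element overhead spilling into an additional page. A quick check of the pattern:

    for k in $(seq 1 10); do printf '%dMB ' $(( 2**k + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB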
00:03:50.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.959 EAL: Restoring previous memory policy: 4 00:03:50.959 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.959 EAL: request: mp_malloc_sync 00:03:50.959 EAL: No shared files mode enabled, IPC is disabled 00:03:50.959 EAL: Heap on socket 0 was expanded by 514MB 00:03:50.959 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.220 EAL: request: mp_malloc_sync 00:03:51.220 EAL: No shared files mode enabled, IPC is disabled 00:03:51.220 EAL: Heap on socket 0 was shrunk by 514MB 00:03:51.220 EAL: Trying to obtain current memory policy. 00:03:51.220 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.220 EAL: Restoring previous memory policy: 4 00:03:51.220 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.220 EAL: request: mp_malloc_sync 00:03:51.220 EAL: No shared files mode enabled, IPC is disabled 00:03:51.220 EAL: Heap on socket 0 was expanded by 1026MB 00:03:51.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.480 EAL: request: mp_malloc_sync 00:03:51.480 EAL: No shared files mode enabled, IPC is disabled 00:03:51.480 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:51.480 passed 00:03:51.480 00:03:51.480 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.480 suites 1 1 n/a 0 0 00:03:51.480 tests 2 2 2 0 0 00:03:51.480 asserts 497 497 497 0 n/a 00:03:51.480 00:03:51.480 Elapsed time = 0.658 seconds 00:03:51.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.480 EAL: request: mp_malloc_sync 00:03:51.480 EAL: No shared files mode enabled, IPC is disabled 00:03:51.480 EAL: Heap on socket 0 was shrunk by 2MB 00:03:51.480 EAL: No shared files mode enabled, IPC is disabled 00:03:51.480 EAL: No shared files mode enabled, IPC is disabled 00:03:51.480 EAL: No shared files mode enabled, IPC is disabled 00:03:51.480 00:03:51.480 real 0m0.802s 00:03:51.480 user 0m0.422s 00:03:51.480 sys 0m0.349s 00:03:51.480 17:49:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.480 17:49:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:51.480 ************************************ 00:03:51.480 END TEST env_vtophys 00:03:51.480 ************************************ 00:03:51.480 17:49:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:51.481 17:49:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.481 17:49:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.481 17:49:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.481 ************************************ 00:03:51.481 START TEST env_pci 00:03:51.481 ************************************ 00:03:51.481 17:49:09 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:51.481 00:03:51.481 00:03:51.481 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.481 http://cunit.sourceforge.net/ 00:03:51.481 00:03:51.481 00:03:51.481 Suite: pci 00:03:51.481 Test: pci_hook ...[2024-11-20 17:49:09.986349] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3319456 has claimed it 00:03:51.742 EAL: Cannot find device (10000:00:01.0) 00:03:51.742 EAL: Failed to attach device on primary process 00:03:51.742 passed 00:03:51.742 00:03:51.742 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.742 suites 1 
1 n/a 0 0 00:03:51.742 tests 1 1 1 0 0 00:03:51.742 asserts 25 25 25 0 n/a 00:03:51.742 00:03:51.742 Elapsed time = 0.035 seconds 00:03:51.742 00:03:51.742 real 0m0.056s 00:03:51.742 user 0m0.013s 00:03:51.742 sys 0m0.043s 00:03:51.742 17:49:10 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.742 17:49:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:51.742 ************************************ 00:03:51.742 END TEST env_pci 00:03:51.742 ************************************ 00:03:51.742 17:49:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:51.742 17:49:10 env -- env/env.sh@15 -- # uname 00:03:51.742 17:49:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:51.742 17:49:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:51.742 17:49:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:51.742 17:49:10 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:51.742 17:49:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.742 17:49:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.742 ************************************ 00:03:51.742 START TEST env_dpdk_post_init 00:03:51.742 ************************************ 00:03:51.742 17:49:10 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:51.742 EAL: Detected CPU lcores: 128 00:03:51.742 EAL: Detected NUMA nodes: 2 00:03:51.742 EAL: Detected shared linkage of DPDK 00:03:51.742 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:51.742 EAL: Selected IOVA mode 'VA' 00:03:51.742 EAL: VFIO support initialized 00:03:51.742 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:51.742 EAL: Using IOMMU type 1 (Type 1) 00:03:52.003 EAL: Ignore mapping IO port bar(1) 00:03:52.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:52.265 EAL: Ignore mapping IO port bar(1) 00:03:52.265 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:52.525 EAL: Ignore mapping IO port bar(1) 00:03:52.525 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:52.525 EAL: Ignore mapping IO port bar(1) 00:03:52.787 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:52.787 EAL: Ignore mapping IO port bar(1) 00:03:53.048 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:53.048 EAL: Ignore mapping IO port bar(1) 00:03:53.310 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:53.310 EAL: Ignore mapping IO port bar(1) 00:03:53.310 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:53.572 EAL: Ignore mapping IO port bar(1) 00:03:53.572 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:53.833 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:54.094 EAL: Ignore mapping IO port bar(1) 00:03:54.094 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:54.094 EAL: Ignore mapping IO port bar(1) 00:03:54.378 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:54.378 EAL: Ignore mapping IO port bar(1) 00:03:54.709 EAL: Probe PCI driver: spdk_ioat 
(8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:54.709 EAL: Ignore mapping IO port bar(1) 00:03:54.709 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:54.994 EAL: Ignore mapping IO port bar(1) 00:03:54.994 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:54.994 EAL: Ignore mapping IO port bar(1) 00:03:55.254 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:55.254 EAL: Ignore mapping IO port bar(1) 00:03:55.515 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:55.515 EAL: Ignore mapping IO port bar(1) 00:03:55.775 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:55.775 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:55.775 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:55.775 Starting DPDK initialization... 00:03:55.775 Starting SPDK post initialization... 00:03:55.775 SPDK NVMe probe 00:03:55.775 Attaching to 0000:65:00.0 00:03:55.775 Attached to 0000:65:00.0 00:03:55.775 Cleaning up... 00:03:57.691 00:03:57.691 real 0m5.741s 00:03:57.691 user 0m0.107s 00:03:57.691 sys 0m0.186s 00:03:57.691 17:49:15 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.691 17:49:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:57.691 ************************************ 00:03:57.691 END TEST env_dpdk_post_init 00:03:57.691 ************************************ 00:03:57.691 17:49:15 env -- env/env.sh@26 -- # uname 00:03:57.691 17:49:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:57.691 17:49:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:57.691 17:49:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.691 17:49:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.692 17:49:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 ************************************ 00:03:57.692 START TEST env_mem_callbacks 00:03:57.692 ************************************ 00:03:57.692 17:49:15 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:57.692 EAL: Detected CPU lcores: 128 00:03:57.692 EAL: Detected NUMA nodes: 2 00:03:57.692 EAL: Detected shared linkage of DPDK 00:03:57.692 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.692 EAL: Selected IOVA mode 'VA' 00:03:57.692 EAL: VFIO support initialized 00:03:57.692 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.692 00:03:57.692 00:03:57.692 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.692 http://cunit.sourceforge.net/ 00:03:57.692 00:03:57.692 00:03:57.692 Suite: memory 00:03:57.692 Test: test ... 
00:03:57.692 register 0x200000200000 2097152
00:03:57.692 malloc 3145728
00:03:57.692 register 0x200000400000 4194304
00:03:57.692 buf 0x200000500000 len 3145728 PASSED
00:03:57.692 malloc 64
00:03:57.692 buf 0x2000004fff40 len 64 PASSED
00:03:57.692 malloc 4194304
00:03:57.692 register 0x200000800000 6291456
00:03:57.692 buf 0x200000a00000 len 4194304 PASSED
00:03:57.692 free 0x200000500000 3145728
00:03:57.692 free 0x2000004fff40 64
00:03:57.692 unregister 0x200000400000 4194304 PASSED
00:03:57.692 free 0x200000a00000 4194304
00:03:57.692 unregister 0x200000800000 6291456 PASSED
00:03:57.692 malloc 8388608
00:03:57.692 register 0x200000400000 10485760
00:03:57.692 buf 0x200000600000 len 8388608 PASSED
00:03:57.692 free 0x200000600000 8388608
00:03:57.692 unregister 0x200000400000 10485760 PASSED
00:03:57.692 passed
00:03:57.692
00:03:57.692 Run Summary: Type Total Ran Passed Failed Inactive
00:03:57.692 suites 1 1 n/a 0 0
00:03:57.692 tests 1 1 1 0 0
00:03:57.692 asserts 15 15 15 0 n/a
00:03:57.692
00:03:57.692 Elapsed time = 0.007 seconds
00:03:57.692
00:03:57.692 real 0m0.068s
00:03:57.692 user 0m0.021s
00:03:57.692 sys 0m0.047s
00:03:57.692 17:49:16 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:57.692 17:49:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:57.692 ************************************
00:03:57.692 END TEST env_mem_callbacks
00:03:57.692 ************************************
00:03:57.692
00:03:57.692 real 0m7.484s
00:03:57.692 user 0m1.034s
00:03:57.692 sys 0m1.002s
00:03:57.692 17:49:16 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:57.692 17:49:16 env -- common/autotest_common.sh@10 -- # set +x
00:03:57.692 ************************************
00:03:57.692 END TEST env
00:03:57.692 ************************************
00:03:57.692 17:49:16 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:03:57.692 17:49:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:57.692 17:49:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:57.692 17:49:16 -- common/autotest_common.sh@10 -- # set +x
00:03:57.692 ************************************
00:03:57.692 START TEST rpc
00:03:57.692 ************************************
00:03:57.692 17:49:16 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:03:57.692 * Looking for test storage...
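[Editor's note] The rpc suite starting here launches an spdk_tgt and drives it entirely over the /var/tmp/spdk.sock JSON-RPC socket; the rpc_cmd wrapper in the trace forwards the same method names that SPDK's stock scripts/rpc.py accepts. A sketch of the bdev lifecycle the integrity tests below exercise, issued by hand; the bdev names come from this run's output, and using rpc.py directly instead of rpc_cmd is an assumption about equivalent usage:

    scripts/rpc.py bdev_malloc_create 8 512              # 8 MB bdev, 512-byte blocks -> 16384 blocks
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs                        # dumps descriptors like the JSON below
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0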
00:03:57.953 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:57.953 17:49:16 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.953 17:49:16 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.953 17:49:16 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.953 17:49:16 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.953 17:49:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.953 17:49:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.953 17:49:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.953 17:49:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.953 17:49:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.953 17:49:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.953 17:49:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.953 17:49:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.953 17:49:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.953 17:49:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.953 17:49:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.953 17:49:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:57.953 17:49:16 rpc -- scripts/common.sh@345 -- # : 1 00:03:57.954 17:49:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.954 17:49:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.954 17:49:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:57.954 17:49:16 rpc -- scripts/common.sh@353 -- # local d=1 00:03:57.954 17:49:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.954 17:49:16 rpc -- scripts/common.sh@355 -- # echo 1 00:03:57.954 17:49:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.954 17:49:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:57.954 17:49:16 rpc -- scripts/common.sh@353 -- # local d=2 00:03:57.954 17:49:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.954 17:49:16 rpc -- scripts/common.sh@355 -- # echo 2 00:03:57.954 17:49:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.954 17:49:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.954 17:49:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.954 17:49:16 rpc -- scripts/common.sh@368 -- # return 0 00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.954 --rc genhtml_branch_coverage=1 00:03:57.954 --rc genhtml_function_coverage=1 00:03:57.954 --rc genhtml_legend=1 00:03:57.954 --rc geninfo_all_blocks=1 00:03:57.954 --rc geninfo_unexecuted_blocks=1 00:03:57.954 00:03:57.954 ' 00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.954 --rc genhtml_branch_coverage=1 00:03:57.954 --rc genhtml_function_coverage=1 00:03:57.954 --rc genhtml_legend=1 00:03:57.954 --rc geninfo_all_blocks=1 00:03:57.954 --rc geninfo_unexecuted_blocks=1 00:03:57.954 00:03:57.954 ' 00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.954 --rc genhtml_branch_coverage=1 00:03:57.954 --rc genhtml_function_coverage=1 00:03:57.954 
--rc genhtml_legend=1
00:03:57.954 --rc geninfo_all_blocks=1
00:03:57.954 --rc geninfo_unexecuted_blocks=1
00:03:57.954
00:03:57.954 '
00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.954 --rc genhtml_branch_coverage=1
00:03:57.954 --rc genhtml_function_coverage=1
00:03:57.954 --rc genhtml_legend=1
00:03:57.954 --rc geninfo_all_blocks=1
00:03:57.954 --rc geninfo_unexecuted_blocks=1
00:03:57.954
00:03:57.954 '
00:03:57.954 17:49:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3320743
00:03:57.954 17:49:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:57.954 17:49:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3320743
00:03:57.954 17:49:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@835 -- # '[' -z 3320743 ']'
00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:57.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:57.954 17:49:16 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:57.954 [2024-11-20 17:49:16.389062] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization...
00:03:57.954 [2024-11-20 17:49:16.389132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320743 ]
00:03:57.954 [2024-11-20 17:49:16.474865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:58.215 [2024-11-20 17:49:16.516258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:58.215 [2024-11-20 17:49:16.516296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3320743' to capture a snapshot of events at runtime.
00:03:58.215 [2024-11-20 17:49:16.516304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:58.215 [2024-11-20 17:49:16.516310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:58.215 [2024-11-20 17:49:16.516317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3320743 for offline analysis/debug.
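[Editor's note] The notices above describe the two ways to inspect the tracepoints this target enables with '-e bdev' (the trace_get_info output later in the run confirms the group mask 0x8). Both commands are quoted from the notices themselves; only the destination path in the copy is illustrative, and the pid 3320743 is specific to this run:

    # live snapshot of the bdev tracepoints enabled above
    spdk_trace -s spdk_tgt -p 3320743
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/spdk_tgt_trace.pid3320743 /tmp/spdk_tgt_trace.pid3320743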
00:03:58.215 [2024-11-20 17:49:16.516953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.787 17:49:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.787 17:49:17 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:58.787 17:49:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:58.787 17:49:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:58.787 17:49:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:58.787 17:49:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:58.787 17:49:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.787 17:49:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.787 17:49:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.787 ************************************ 00:03:58.787 START TEST rpc_integrity 00:03:58.787 ************************************ 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.787 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.787 { 00:03:58.787 "name": "Malloc0", 00:03:58.787 "aliases": [ 00:03:58.787 "852fc7f8-bb50-4047-87cf-aebc09565588" 00:03:58.787 ], 00:03:58.787 "product_name": "Malloc disk", 00:03:58.787 "block_size": 512, 00:03:58.787 "num_blocks": 16384, 00:03:58.787 "uuid": "852fc7f8-bb50-4047-87cf-aebc09565588", 00:03:58.787 "assigned_rate_limits": { 00:03:58.787 "rw_ios_per_sec": 0, 00:03:58.787 "rw_mbytes_per_sec": 0, 00:03:58.787 "r_mbytes_per_sec": 0, 00:03:58.787 "w_mbytes_per_sec": 0 00:03:58.787 }, 00:03:58.787 "claimed": false, 
00:03:58.787 "zoned": false, 00:03:58.787 "supported_io_types": { 00:03:58.787 "read": true, 00:03:58.787 "write": true, 00:03:58.787 "unmap": true, 00:03:58.787 "flush": true, 00:03:58.787 "reset": true, 00:03:58.787 "nvme_admin": false, 00:03:58.787 "nvme_io": false, 00:03:58.787 "nvme_io_md": false, 00:03:58.787 "write_zeroes": true, 00:03:58.787 "zcopy": true, 00:03:58.787 "get_zone_info": false, 00:03:58.787 "zone_management": false, 00:03:58.787 "zone_append": false, 00:03:58.787 "compare": false, 00:03:58.787 "compare_and_write": false, 00:03:58.787 "abort": true, 00:03:58.787 "seek_hole": false, 00:03:58.787 "seek_data": false, 00:03:58.787 "copy": true, 00:03:58.787 "nvme_iov_md": false 00:03:58.787 }, 00:03:58.787 "memory_domains": [ 00:03:58.787 { 00:03:58.787 "dma_device_id": "system", 00:03:58.787 "dma_device_type": 1 00:03:58.787 }, 00:03:58.787 { 00:03:58.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.787 "dma_device_type": 2 00:03:58.787 } 00:03:58.787 ], 00:03:58.787 "driver_specific": {} 00:03:58.787 } 00:03:58.787 ]' 00:03:58.787 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.048 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.048 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:59.048 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.048 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.048 [2024-11-20 17:49:17.358739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:59.048 [2024-11-20 17:49:17.358772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.048 [2024-11-20 17:49:17.358786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b99f00 00:03:59.048 [2024-11-20 17:49:17.358793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.048 [2024-11-20 17:49:17.360158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.048 [2024-11-20 17:49:17.360182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.048 Passthru0 00:03:59.048 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.048 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.048 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.048 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.048 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.048 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.048 { 00:03:59.048 "name": "Malloc0", 00:03:59.048 "aliases": [ 00:03:59.048 "852fc7f8-bb50-4047-87cf-aebc09565588" 00:03:59.048 ], 00:03:59.048 "product_name": "Malloc disk", 00:03:59.048 "block_size": 512, 00:03:59.048 "num_blocks": 16384, 00:03:59.048 "uuid": "852fc7f8-bb50-4047-87cf-aebc09565588", 00:03:59.048 "assigned_rate_limits": { 00:03:59.048 "rw_ios_per_sec": 0, 00:03:59.048 "rw_mbytes_per_sec": 0, 00:03:59.048 "r_mbytes_per_sec": 0, 00:03:59.048 "w_mbytes_per_sec": 0 00:03:59.048 }, 00:03:59.048 "claimed": true, 00:03:59.048 "claim_type": "exclusive_write", 00:03:59.048 "zoned": false, 00:03:59.048 "supported_io_types": { 00:03:59.048 "read": true, 00:03:59.048 "write": true, 00:03:59.048 "unmap": true, 00:03:59.048 "flush": true, 00:03:59.048 "reset": true, 
00:03:59.048 "nvme_admin": false, 00:03:59.048 "nvme_io": false, 00:03:59.048 "nvme_io_md": false, 00:03:59.048 "write_zeroes": true, 00:03:59.048 "zcopy": true, 00:03:59.048 "get_zone_info": false, 00:03:59.048 "zone_management": false, 00:03:59.048 "zone_append": false, 00:03:59.048 "compare": false, 00:03:59.048 "compare_and_write": false, 00:03:59.048 "abort": true, 00:03:59.048 "seek_hole": false, 00:03:59.048 "seek_data": false, 00:03:59.048 "copy": true, 00:03:59.048 "nvme_iov_md": false 00:03:59.048 }, 00:03:59.048 "memory_domains": [ 00:03:59.048 { 00:03:59.049 "dma_device_id": "system", 00:03:59.049 "dma_device_type": 1 00:03:59.049 }, 00:03:59.049 { 00:03:59.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.049 "dma_device_type": 2 00:03:59.049 } 00:03:59.049 ], 00:03:59.049 "driver_specific": {} 00:03:59.049 }, 00:03:59.049 { 00:03:59.049 "name": "Passthru0", 00:03:59.049 "aliases": [ 00:03:59.049 "0db15477-b7d2-5514-861a-2136c79a0925" 00:03:59.049 ], 00:03:59.049 "product_name": "passthru", 00:03:59.049 "block_size": 512, 00:03:59.049 "num_blocks": 16384, 00:03:59.049 "uuid": "0db15477-b7d2-5514-861a-2136c79a0925", 00:03:59.049 "assigned_rate_limits": { 00:03:59.049 "rw_ios_per_sec": 0, 00:03:59.049 "rw_mbytes_per_sec": 0, 00:03:59.049 "r_mbytes_per_sec": 0, 00:03:59.049 "w_mbytes_per_sec": 0 00:03:59.049 }, 00:03:59.049 "claimed": false, 00:03:59.049 "zoned": false, 00:03:59.049 "supported_io_types": { 00:03:59.049 "read": true, 00:03:59.049 "write": true, 00:03:59.049 "unmap": true, 00:03:59.049 "flush": true, 00:03:59.049 "reset": true, 00:03:59.049 "nvme_admin": false, 00:03:59.049 "nvme_io": false, 00:03:59.049 "nvme_io_md": false, 00:03:59.049 "write_zeroes": true, 00:03:59.049 "zcopy": true, 00:03:59.049 "get_zone_info": false, 00:03:59.049 "zone_management": false, 00:03:59.049 "zone_append": false, 00:03:59.049 "compare": false, 00:03:59.049 "compare_and_write": false, 00:03:59.049 "abort": true, 00:03:59.049 "seek_hole": false, 00:03:59.049 "seek_data": false, 00:03:59.049 "copy": true, 00:03:59.049 "nvme_iov_md": false 00:03:59.049 }, 00:03:59.049 "memory_domains": [ 00:03:59.049 { 00:03:59.049 "dma_device_id": "system", 00:03:59.049 "dma_device_type": 1 00:03:59.049 }, 00:03:59.049 { 00:03:59.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.049 "dma_device_type": 2 00:03:59.049 } 00:03:59.049 ], 00:03:59.049 "driver_specific": { 00:03:59.049 "passthru": { 00:03:59.049 "name": "Passthru0", 00:03:59.049 "base_bdev_name": "Malloc0" 00:03:59.049 } 00:03:59.049 } 00:03:59.049 } 00:03:59.049 ]' 00:03:59.049 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.049 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.049 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.049 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.049 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.049 
17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.049 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.049 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:59.049 17:49:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.049 00:03:59.049 real 0m0.294s 00:03:59.049 user 0m0.194s 00:03:59.049 sys 0m0.034s 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.049 17:49:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.049 ************************************ 00:03:59.049 END TEST rpc_integrity 00:03:59.049 ************************************ 00:03:59.049 17:49:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:59.049 17:49:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.049 17:49:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.049 17:49:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.310 ************************************ 00:03:59.310 START TEST rpc_plugins 00:03:59.310 ************************************ 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:59.310 { 00:03:59.310 "name": "Malloc1", 00:03:59.310 "aliases": [ 00:03:59.310 "917e82ce-f085-400a-8ff2-99a98da605a9" 00:03:59.310 ], 00:03:59.310 "product_name": "Malloc disk", 00:03:59.310 "block_size": 4096, 00:03:59.310 "num_blocks": 256, 00:03:59.310 "uuid": "917e82ce-f085-400a-8ff2-99a98da605a9", 00:03:59.310 "assigned_rate_limits": { 00:03:59.310 "rw_ios_per_sec": 0, 00:03:59.310 "rw_mbytes_per_sec": 0, 00:03:59.310 "r_mbytes_per_sec": 0, 00:03:59.310 "w_mbytes_per_sec": 0 00:03:59.310 }, 00:03:59.310 "claimed": false, 00:03:59.310 "zoned": false, 00:03:59.310 "supported_io_types": { 00:03:59.310 "read": true, 00:03:59.310 "write": true, 00:03:59.310 "unmap": true, 00:03:59.310 "flush": true, 00:03:59.310 "reset": true, 00:03:59.310 "nvme_admin": false, 00:03:59.310 "nvme_io": false, 00:03:59.310 "nvme_io_md": false, 00:03:59.310 "write_zeroes": true, 00:03:59.310 "zcopy": true, 00:03:59.310 "get_zone_info": false, 00:03:59.310 "zone_management": false, 00:03:59.310 "zone_append": false, 00:03:59.310 "compare": false, 00:03:59.310 "compare_and_write": false, 00:03:59.310 "abort": true, 00:03:59.310 "seek_hole": false, 00:03:59.310 "seek_data": false, 00:03:59.310 "copy": true, 00:03:59.310 "nvme_iov_md": false 00:03:59.310 }, 00:03:59.310 
"memory_domains": [ 00:03:59.310 { 00:03:59.310 "dma_device_id": "system", 00:03:59.310 "dma_device_type": 1 00:03:59.310 }, 00:03:59.310 { 00:03:59.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.310 "dma_device_type": 2 00:03:59.310 } 00:03:59.310 ], 00:03:59.310 "driver_specific": {} 00:03:59.310 } 00:03:59.310 ]' 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:59.310 17:49:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:59.310 00:03:59.310 real 0m0.154s 00:03:59.310 user 0m0.094s 00:03:59.310 sys 0m0.020s 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.310 17:49:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.310 ************************************ 00:03:59.310 END TEST rpc_plugins 00:03:59.310 ************************************ 00:03:59.310 17:49:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:59.310 17:49:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.310 17:49:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.310 17:49:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.310 ************************************ 00:03:59.310 START TEST rpc_trace_cmd_test 00:03:59.310 ************************************ 00:03:59.310 17:49:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:59.310 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:59.310 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:59.310 17:49:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.310 17:49:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.310 17:49:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.310 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:59.310 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3320743", 00:03:59.310 "tpoint_group_mask": "0x8", 00:03:59.310 "iscsi_conn": { 00:03:59.310 "mask": "0x2", 00:03:59.310 "tpoint_mask": "0x0" 00:03:59.310 }, 00:03:59.310 "scsi": { 00:03:59.310 "mask": "0x4", 00:03:59.310 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "bdev": { 00:03:59.311 "mask": "0x8", 00:03:59.311 "tpoint_mask": "0xffffffffffffffff" 00:03:59.311 }, 00:03:59.311 "nvmf_rdma": { 00:03:59.311 "mask": "0x10", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "nvmf_tcp": { 00:03:59.311 "mask": "0x20", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 
00:03:59.311 "ftl": { 00:03:59.311 "mask": "0x40", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "blobfs": { 00:03:59.311 "mask": "0x80", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "dsa": { 00:03:59.311 "mask": "0x200", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "thread": { 00:03:59.311 "mask": "0x400", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "nvme_pcie": { 00:03:59.311 "mask": "0x800", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "iaa": { 00:03:59.311 "mask": "0x1000", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "nvme_tcp": { 00:03:59.311 "mask": "0x2000", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "bdev_nvme": { 00:03:59.311 "mask": "0x4000", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "sock": { 00:03:59.311 "mask": "0x8000", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "blob": { 00:03:59.311 "mask": "0x10000", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "bdev_raid": { 00:03:59.311 "mask": "0x20000", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 }, 00:03:59.311 "scheduler": { 00:03:59.311 "mask": "0x40000", 00:03:59.311 "tpoint_mask": "0x0" 00:03:59.311 } 00:03:59.311 }' 00:03:59.311 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:59.572 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:59.572 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:59.572 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:59.572 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:59.572 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:59.572 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:59.572 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:59.572 17:49:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:59.572 17:49:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:59.572 00:03:59.572 real 0m0.224s 00:03:59.572 user 0m0.185s 00:03:59.572 sys 0m0.030s 00:03:59.572 17:49:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.572 17:49:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.572 ************************************ 00:03:59.572 END TEST rpc_trace_cmd_test 00:03:59.572 ************************************ 00:03:59.572 17:49:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:59.572 17:49:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:59.572 17:49:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:59.572 17:49:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.573 17:49:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.573 17:49:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.834 ************************************ 00:03:59.834 START TEST rpc_daemon_integrity 00:03:59.834 ************************************ 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.834 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.834 { 00:03:59.834 "name": "Malloc2", 00:03:59.834 "aliases": [ 00:03:59.834 "f09e7817-e169-404e-a717-53df091dbe5b" 00:03:59.834 ], 00:03:59.834 "product_name": "Malloc disk", 00:03:59.834 "block_size": 512, 00:03:59.834 "num_blocks": 16384, 00:03:59.834 "uuid": "f09e7817-e169-404e-a717-53df091dbe5b", 00:03:59.834 "assigned_rate_limits": { 00:03:59.834 "rw_ios_per_sec": 0, 00:03:59.834 "rw_mbytes_per_sec": 0, 00:03:59.834 "r_mbytes_per_sec": 0, 00:03:59.834 "w_mbytes_per_sec": 0 00:03:59.834 }, 00:03:59.834 "claimed": false, 00:03:59.834 "zoned": false, 00:03:59.834 "supported_io_types": { 00:03:59.834 "read": true, 00:03:59.834 "write": true, 00:03:59.834 "unmap": true, 00:03:59.834 "flush": true, 00:03:59.834 "reset": true, 00:03:59.834 "nvme_admin": false, 00:03:59.834 "nvme_io": false, 00:03:59.834 "nvme_io_md": false, 00:03:59.834 "write_zeroes": true, 00:03:59.834 "zcopy": true, 00:03:59.834 "get_zone_info": false, 00:03:59.834 "zone_management": false, 00:03:59.834 "zone_append": false, 00:03:59.835 "compare": false, 00:03:59.835 "compare_and_write": false, 00:03:59.835 "abort": true, 00:03:59.835 "seek_hole": false, 00:03:59.835 "seek_data": false, 00:03:59.835 "copy": true, 00:03:59.835 "nvme_iov_md": false 00:03:59.835 }, 00:03:59.835 "memory_domains": [ 00:03:59.835 { 00:03:59.835 "dma_device_id": "system", 00:03:59.835 "dma_device_type": 1 00:03:59.835 }, 00:03:59.835 { 00:03:59.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.835 "dma_device_type": 2 00:03:59.835 } 00:03:59.835 ], 00:03:59.835 "driver_specific": {} 00:03:59.835 } 00:03:59.835 ]' 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.835 [2024-11-20 17:49:18.257190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:59.835 [2024-11-20 17:49:18.257218] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.835 [2024-11-20 17:49:18.257231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a58250 00:03:59.835 [2024-11-20 17:49:18.257239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.835 [2024-11-20 17:49:18.258490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.835 [2024-11-20 17:49:18.258510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.835 Passthru0 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.835 { 00:03:59.835 "name": "Malloc2", 00:03:59.835 "aliases": [ 00:03:59.835 "f09e7817-e169-404e-a717-53df091dbe5b" 00:03:59.835 ], 00:03:59.835 "product_name": "Malloc disk", 00:03:59.835 "block_size": 512, 00:03:59.835 "num_blocks": 16384, 00:03:59.835 "uuid": "f09e7817-e169-404e-a717-53df091dbe5b", 00:03:59.835 "assigned_rate_limits": { 00:03:59.835 "rw_ios_per_sec": 0, 00:03:59.835 "rw_mbytes_per_sec": 0, 00:03:59.835 "r_mbytes_per_sec": 0, 00:03:59.835 "w_mbytes_per_sec": 0 00:03:59.835 }, 00:03:59.835 "claimed": true, 00:03:59.835 "claim_type": "exclusive_write", 00:03:59.835 "zoned": false, 00:03:59.835 "supported_io_types": { 00:03:59.835 "read": true, 00:03:59.835 "write": true, 00:03:59.835 "unmap": true, 00:03:59.835 "flush": true, 00:03:59.835 "reset": true, 00:03:59.835 "nvme_admin": false, 00:03:59.835 "nvme_io": false, 00:03:59.835 "nvme_io_md": false, 00:03:59.835 "write_zeroes": true, 00:03:59.835 "zcopy": true, 00:03:59.835 "get_zone_info": false, 00:03:59.835 "zone_management": false, 00:03:59.835 "zone_append": false, 00:03:59.835 "compare": false, 00:03:59.835 "compare_and_write": false, 00:03:59.835 "abort": true, 00:03:59.835 "seek_hole": false, 00:03:59.835 "seek_data": false, 00:03:59.835 "copy": true, 00:03:59.835 "nvme_iov_md": false 00:03:59.835 }, 00:03:59.835 "memory_domains": [ 00:03:59.835 { 00:03:59.835 "dma_device_id": "system", 00:03:59.835 "dma_device_type": 1 00:03:59.835 }, 00:03:59.835 { 00:03:59.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.835 "dma_device_type": 2 00:03:59.835 } 00:03:59.835 ], 00:03:59.835 "driver_specific": {} 00:03:59.835 }, 00:03:59.835 { 00:03:59.835 "name": "Passthru0", 00:03:59.835 "aliases": [ 00:03:59.835 "13c20e8b-f946-5b95-bf12-9422a06fc544" 00:03:59.835 ], 00:03:59.835 "product_name": "passthru", 00:03:59.835 "block_size": 512, 00:03:59.835 "num_blocks": 16384, 00:03:59.835 "uuid": "13c20e8b-f946-5b95-bf12-9422a06fc544", 00:03:59.835 "assigned_rate_limits": { 00:03:59.835 "rw_ios_per_sec": 0, 00:03:59.835 "rw_mbytes_per_sec": 0, 00:03:59.835 "r_mbytes_per_sec": 0, 00:03:59.835 "w_mbytes_per_sec": 0 00:03:59.835 }, 00:03:59.835 "claimed": false, 00:03:59.835 "zoned": false, 00:03:59.835 "supported_io_types": { 00:03:59.835 "read": true, 00:03:59.835 "write": true, 00:03:59.835 "unmap": true, 00:03:59.835 "flush": true, 00:03:59.835 "reset": true, 00:03:59.835 "nvme_admin": false, 
00:03:59.835 "nvme_io": false, 00:03:59.835 "nvme_io_md": false, 00:03:59.835 "write_zeroes": true, 00:03:59.835 "zcopy": true, 00:03:59.835 "get_zone_info": false, 00:03:59.835 "zone_management": false, 00:03:59.835 "zone_append": false, 00:03:59.835 "compare": false, 00:03:59.835 "compare_and_write": false, 00:03:59.835 "abort": true, 00:03:59.835 "seek_hole": false, 00:03:59.835 "seek_data": false, 00:03:59.835 "copy": true, 00:03:59.835 "nvme_iov_md": false 00:03:59.835 }, 00:03:59.835 "memory_domains": [ 00:03:59.835 { 00:03:59.835 "dma_device_id": "system", 00:03:59.835 "dma_device_type": 1 00:03:59.835 }, 00:03:59.835 { 00:03:59.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.835 "dma_device_type": 2 00:03:59.835 } 00:03:59.835 ], 00:03:59.835 "driver_specific": { 00:03:59.835 "passthru": { 00:03:59.835 "name": "Passthru0", 00:03:59.835 "base_bdev_name": "Malloc2" 00:03:59.835 } 00:03:59.835 } 00:03:59.835 } 00:03:59.835 ]' 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.835 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.096 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.096 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.096 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.096 17:49:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.096 00:04:00.096 real 0m0.303s 00:04:00.096 user 0m0.191s 00:04:00.096 sys 0m0.043s 00:04:00.096 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.096 17:49:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.096 ************************************ 00:04:00.096 END TEST rpc_daemon_integrity 00:04:00.096 ************************************ 00:04:00.096 17:49:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:00.096 17:49:18 rpc -- rpc/rpc.sh@84 -- # killprocess 3320743 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 3320743 ']' 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@958 -- # kill -0 3320743 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@959 -- # uname 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3320743 00:04:00.096 17:49:18 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3320743' 00:04:00.096 killing process with pid 3320743 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@973 -- # kill 3320743 00:04:00.096 17:49:18 rpc -- common/autotest_common.sh@978 -- # wait 3320743 00:04:00.358 00:04:00.358 real 0m2.603s 00:04:00.358 user 0m3.365s 00:04:00.358 sys 0m0.740s 00:04:00.358 17:49:18 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.358 17:49:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.358 ************************************ 00:04:00.358 END TEST rpc 00:04:00.358 ************************************ 00:04:00.358 17:49:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:00.358 17:49:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.358 17:49:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.358 17:49:18 -- common/autotest_common.sh@10 -- # set +x 00:04:00.358 ************************************ 00:04:00.358 START TEST skip_rpc 00:04:00.358 ************************************ 00:04:00.358 17:49:18 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:00.619 * Looking for test storage... 00:04:00.619 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.619 17:49:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.619 --rc genhtml_branch_coverage=1 00:04:00.619 --rc genhtml_function_coverage=1 00:04:00.619 --rc genhtml_legend=1 00:04:00.619 --rc geninfo_all_blocks=1 00:04:00.619 --rc geninfo_unexecuted_blocks=1 00:04:00.619 00:04:00.619 ' 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.619 --rc genhtml_branch_coverage=1 00:04:00.619 --rc genhtml_function_coverage=1 00:04:00.619 --rc genhtml_legend=1 00:04:00.619 --rc geninfo_all_blocks=1 00:04:00.619 --rc geninfo_unexecuted_blocks=1 00:04:00.619 00:04:00.619 ' 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.619 --rc genhtml_branch_coverage=1 00:04:00.619 --rc genhtml_function_coverage=1 00:04:00.619 --rc genhtml_legend=1 00:04:00.619 --rc geninfo_all_blocks=1 00:04:00.619 --rc geninfo_unexecuted_blocks=1 00:04:00.619 00:04:00.619 ' 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.619 --rc genhtml_branch_coverage=1 00:04:00.619 --rc genhtml_function_coverage=1 00:04:00.619 --rc genhtml_legend=1 00:04:00.619 --rc geninfo_all_blocks=1 00:04:00.619 --rc geninfo_unexecuted_blocks=1 00:04:00.619 00:04:00.619 ' 00:04:00.619 17:49:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:00.619 17:49:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:00.619 17:49:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.619 17:49:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.619 ************************************ 00:04:00.619 START TEST skip_rpc 00:04:00.619 ************************************ 00:04:00.619 17:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:00.619 17:49:19 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3321453 00:04:00.619 17:49:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.619 17:49:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:00.619 17:49:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:00.619 [2024-11-20 17:49:19.092672] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:00.619 [2024-11-20 17:49:19.092717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321453 ] 00:04:00.879 [2024-11-20 17:49:19.171228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.879 [2024-11-20 17:49:19.207141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3321453 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3321453 ']' 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3321453 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3321453 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3321453' 00:04:06.165 killing process with pid 3321453 00:04:06.165 17:49:24 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3321453
00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3321453
00:04:06.165
00:04:06.165 real 0m5.285s
00:04:06.165 user 0m5.088s
00:04:06.165 sys 0m0.248s
00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:06.165 17:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:06.165 ************************************
00:04:06.165 END TEST skip_rpc
00:04:06.165 ************************************
00:04:06.165 17:49:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:06.165 17:49:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:06.165 17:49:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:06.165 17:49:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:06.165 ************************************
00:04:06.165 START TEST skip_rpc_with_json
00:04:06.165 ************************************
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3322496
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3322496
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3322496 ']'
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:06.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:06.165 17:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:06.165 [2024-11-20 17:49:24.461737] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization...
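[Editor's note] skip_rpc_with_json starts a fresh target, shows that nvmf_get_transports fails with "No such device" before any transport exists, creates a TCP transport, and then serializes the live target state with save_config into the JSON dump that follows. A sketch of the same flow with SPDK's stock scripts/rpc.py; the output path and the --json restart line illustrate the intent and are not taken from this log:

    scripts/rpc.py nvmf_create_transport -t tcp        # "*** TCP Transport Init ***" as seen below
    scripts/rpc.py save_config > /tmp/config.json      # emits a JSON document like the one below
    # the saved file can seed a new target: build/bin/spdk_tgt --json /tmp/config.json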
00:04:06.165 [2024-11-20 17:49:24.461790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322496 ] 00:04:06.165 [2024-11-20 17:49:24.542040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.165 [2024-11-20 17:49:24.580253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.737 [2024-11-20 17:49:25.245202] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:06.737 request: 00:04:06.737 { 00:04:06.737 "trtype": "tcp", 00:04:06.737 "method": "nvmf_get_transports", 00:04:06.737 "req_id": 1 00:04:06.737 } 00:04:06.737 Got JSON-RPC error response 00:04:06.737 response: 00:04:06.737 { 00:04:06.737 "code": -19, 00:04:06.737 "message": "No such device" 00:04:06.737 } 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.737 [2024-11-20 17:49:25.257326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.737 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.997 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.997 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:06.997 { 00:04:06.997 "subsystems": [ 00:04:06.997 { 00:04:06.997 "subsystem": "fsdev", 00:04:06.997 "config": [ 00:04:06.997 { 00:04:06.997 "method": "fsdev_set_opts", 00:04:06.997 "params": { 00:04:06.997 "fsdev_io_pool_size": 65535, 00:04:06.997 "fsdev_io_cache_size": 256 00:04:06.997 } 00:04:06.997 } 00:04:06.997 ] 00:04:06.997 }, 00:04:06.997 { 00:04:06.997 "subsystem": "keyring", 00:04:06.997 "config": [] 00:04:06.997 }, 00:04:06.997 { 00:04:06.997 "subsystem": "iobuf", 00:04:06.997 "config": [ 00:04:06.997 { 00:04:06.998 "method": "iobuf_set_options", 00:04:06.998 "params": { 00:04:06.998 "small_pool_count": 8192, 00:04:06.998 "large_pool_count": 1024, 00:04:06.998 "small_bufsize": 8192, 00:04:06.998 "large_bufsize": 135168, 00:04:06.998 "enable_numa": false 00:04:06.998 } 00:04:06.998 } 00:04:06.998 ] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "sock", 00:04:06.998 "config": [ 00:04:06.998 { 
00:04:06.998 "method": "sock_set_default_impl", 00:04:06.998 "params": { 00:04:06.998 "impl_name": "posix" 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "sock_impl_set_options", 00:04:06.998 "params": { 00:04:06.998 "impl_name": "ssl", 00:04:06.998 "recv_buf_size": 4096, 00:04:06.998 "send_buf_size": 4096, 00:04:06.998 "enable_recv_pipe": true, 00:04:06.998 "enable_quickack": false, 00:04:06.998 "enable_placement_id": 0, 00:04:06.998 "enable_zerocopy_send_server": true, 00:04:06.998 "enable_zerocopy_send_client": false, 00:04:06.998 "zerocopy_threshold": 0, 00:04:06.998 "tls_version": 0, 00:04:06.998 "enable_ktls": false 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "sock_impl_set_options", 00:04:06.998 "params": { 00:04:06.998 "impl_name": "posix", 00:04:06.998 "recv_buf_size": 2097152, 00:04:06.998 "send_buf_size": 2097152, 00:04:06.998 "enable_recv_pipe": true, 00:04:06.998 "enable_quickack": false, 00:04:06.998 "enable_placement_id": 0, 00:04:06.998 "enable_zerocopy_send_server": true, 00:04:06.998 "enable_zerocopy_send_client": false, 00:04:06.998 "zerocopy_threshold": 0, 00:04:06.998 "tls_version": 0, 00:04:06.998 "enable_ktls": false 00:04:06.998 } 00:04:06.998 } 00:04:06.998 ] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "vmd", 00:04:06.998 "config": [] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "accel", 00:04:06.998 "config": [ 00:04:06.998 { 00:04:06.998 "method": "accel_set_options", 00:04:06.998 "params": { 00:04:06.998 "small_cache_size": 128, 00:04:06.998 "large_cache_size": 16, 00:04:06.998 "task_count": 2048, 00:04:06.998 "sequence_count": 2048, 00:04:06.998 "buf_count": 2048 00:04:06.998 } 00:04:06.998 } 00:04:06.998 ] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "bdev", 00:04:06.998 "config": [ 00:04:06.998 { 00:04:06.998 "method": "bdev_set_options", 00:04:06.998 "params": { 00:04:06.998 "bdev_io_pool_size": 65535, 00:04:06.998 "bdev_io_cache_size": 256, 00:04:06.998 "bdev_auto_examine": true, 00:04:06.998 "iobuf_small_cache_size": 128, 00:04:06.998 "iobuf_large_cache_size": 16 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "bdev_raid_set_options", 00:04:06.998 "params": { 00:04:06.998 "process_window_size_kb": 1024, 00:04:06.998 "process_max_bandwidth_mb_sec": 0 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "bdev_iscsi_set_options", 00:04:06.998 "params": { 00:04:06.998 "timeout_sec": 30 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "bdev_nvme_set_options", 00:04:06.998 "params": { 00:04:06.998 "action_on_timeout": "none", 00:04:06.998 "timeout_us": 0, 00:04:06.998 "timeout_admin_us": 0, 00:04:06.998 "keep_alive_timeout_ms": 10000, 00:04:06.998 "arbitration_burst": 0, 00:04:06.998 "low_priority_weight": 0, 00:04:06.998 "medium_priority_weight": 0, 00:04:06.998 "high_priority_weight": 0, 00:04:06.998 "nvme_adminq_poll_period_us": 10000, 00:04:06.998 "nvme_ioq_poll_period_us": 0, 00:04:06.998 "io_queue_requests": 0, 00:04:06.998 "delay_cmd_submit": true, 00:04:06.998 "transport_retry_count": 4, 00:04:06.998 "bdev_retry_count": 3, 00:04:06.998 "transport_ack_timeout": 0, 00:04:06.998 "ctrlr_loss_timeout_sec": 0, 00:04:06.998 "reconnect_delay_sec": 0, 00:04:06.998 "fast_io_fail_timeout_sec": 0, 00:04:06.998 "disable_auto_failback": false, 00:04:06.998 "generate_uuids": false, 00:04:06.998 "transport_tos": 0, 00:04:06.998 "nvme_error_stat": false, 00:04:06.998 "rdma_srq_size": 0, 00:04:06.998 "io_path_stat": false, 
00:04:06.998 "allow_accel_sequence": false, 00:04:06.998 "rdma_max_cq_size": 0, 00:04:06.998 "rdma_cm_event_timeout_ms": 0, 00:04:06.998 "dhchap_digests": [ 00:04:06.998 "sha256", 00:04:06.998 "sha384", 00:04:06.998 "sha512" 00:04:06.998 ], 00:04:06.998 "dhchap_dhgroups": [ 00:04:06.998 "null", 00:04:06.998 "ffdhe2048", 00:04:06.998 "ffdhe3072", 00:04:06.998 "ffdhe4096", 00:04:06.998 "ffdhe6144", 00:04:06.998 "ffdhe8192" 00:04:06.998 ] 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "bdev_nvme_set_hotplug", 00:04:06.998 "params": { 00:04:06.998 "period_us": 100000, 00:04:06.998 "enable": false 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "bdev_wait_for_examine" 00:04:06.998 } 00:04:06.998 ] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "scsi", 00:04:06.998 "config": null 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "scheduler", 00:04:06.998 "config": [ 00:04:06.998 { 00:04:06.998 "method": "framework_set_scheduler", 00:04:06.998 "params": { 00:04:06.998 "name": "static" 00:04:06.998 } 00:04:06.998 } 00:04:06.998 ] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "vhost_scsi", 00:04:06.998 "config": [] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "vhost_blk", 00:04:06.998 "config": [] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "ublk", 00:04:06.998 "config": [] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "nbd", 00:04:06.998 "config": [] 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "subsystem": "nvmf", 00:04:06.998 "config": [ 00:04:06.998 { 00:04:06.998 "method": "nvmf_set_config", 00:04:06.998 "params": { 00:04:06.998 "discovery_filter": "match_any", 00:04:06.998 "admin_cmd_passthru": { 00:04:06.998 "identify_ctrlr": false 00:04:06.998 }, 00:04:06.998 "dhchap_digests": [ 00:04:06.998 "sha256", 00:04:06.998 "sha384", 00:04:06.998 "sha512" 00:04:06.998 ], 00:04:06.998 "dhchap_dhgroups": [ 00:04:06.998 "null", 00:04:06.998 "ffdhe2048", 00:04:06.998 "ffdhe3072", 00:04:06.998 "ffdhe4096", 00:04:06.998 "ffdhe6144", 00:04:06.998 "ffdhe8192" 00:04:06.998 ] 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "nvmf_set_max_subsystems", 00:04:06.998 "params": { 00:04:06.998 "max_subsystems": 1024 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "nvmf_set_crdt", 00:04:06.998 "params": { 00:04:06.998 "crdt1": 0, 00:04:06.998 "crdt2": 0, 00:04:06.998 "crdt3": 0 00:04:06.998 } 00:04:06.998 }, 00:04:06.998 { 00:04:06.998 "method": "nvmf_create_transport", 00:04:06.998 "params": { 00:04:06.998 "trtype": "TCP", 00:04:06.998 "max_queue_depth": 128, 00:04:06.998 "max_io_qpairs_per_ctrlr": 127, 00:04:06.998 "in_capsule_data_size": 4096, 00:04:06.998 "max_io_size": 131072, 00:04:06.998 "io_unit_size": 131072, 00:04:06.998 "max_aq_depth": 128, 00:04:06.998 "num_shared_buffers": 511, 00:04:06.999 "buf_cache_size": 4294967295, 00:04:06.999 "dif_insert_or_strip": false, 00:04:06.999 "zcopy": false, 00:04:06.999 "c2h_success": true, 00:04:06.999 "sock_priority": 0, 00:04:06.999 "abort_timeout_sec": 1, 00:04:06.999 "ack_timeout": 0, 00:04:06.999 "data_wr_pool_size": 0 00:04:06.999 } 00:04:06.999 } 00:04:06.999 ] 00:04:06.999 }, 00:04:06.999 { 00:04:06.999 "subsystem": "iscsi", 00:04:06.999 "config": [ 00:04:06.999 { 00:04:06.999 "method": "iscsi_set_options", 00:04:06.999 "params": { 00:04:06.999 "node_base": "iqn.2016-06.io.spdk", 00:04:06.999 "max_sessions": 128, 00:04:06.999 "max_connections_per_session": 2, 00:04:06.999 "max_queue_depth": 64, 00:04:06.999 
"default_time2wait": 2, 00:04:06.999 "default_time2retain": 20, 00:04:06.999 "first_burst_length": 8192, 00:04:06.999 "immediate_data": true, 00:04:06.999 "allow_duplicated_isid": false, 00:04:06.999 "error_recovery_level": 0, 00:04:06.999 "nop_timeout": 60, 00:04:06.999 "nop_in_interval": 30, 00:04:06.999 "disable_chap": false, 00:04:06.999 "require_chap": false, 00:04:06.999 "mutual_chap": false, 00:04:06.999 "chap_group": 0, 00:04:06.999 "max_large_datain_per_connection": 64, 00:04:06.999 "max_r2t_per_connection": 4, 00:04:06.999 "pdu_pool_size": 36864, 00:04:06.999 "immediate_data_pool_size": 16384, 00:04:06.999 "data_out_pool_size": 2048 00:04:06.999 } 00:04:06.999 } 00:04:06.999 ] 00:04:06.999 } 00:04:06.999 ] 00:04:06.999 } 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3322496 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3322496 ']' 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3322496 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3322496 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3322496' 00:04:06.999 killing process with pid 3322496 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3322496 00:04:06.999 17:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3322496 00:04:07.258 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3322841 00:04:07.258 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:07.258 17:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3322841 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3322841 ']' 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3322841 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3322841 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3322841' 00:04:12.545 killing process with pid 3322841 00:04:12.545 17:49:30 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3322841 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3322841 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:12.545 00:04:12.545 real 0m6.574s 00:04:12.545 user 0m6.469s 00:04:12.545 sys 0m0.565s 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.545 17:49:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.545 ************************************ 00:04:12.545 END TEST skip_rpc_with_json 00:04:12.545 ************************************ 00:04:12.545 17:49:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:12.545 17:49:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.545 17:49:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.545 17:49:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.545 ************************************ 00:04:12.545 START TEST skip_rpc_with_delay 00:04:12.545 ************************************ 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:12.545 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.807 [2024-11-20 17:49:31.127917] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
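For context: the *ERROR* line above is the expected outcome, not a failure. test_skip_rpc_with_delay launches spdk_tgt with both --no-rpc-server and --wait-for-rpc and asserts that the app refuses the combination, since there is no RPC server to wait on. A minimal stand-alone sketch of the same check (binary path taken from this workspace; the NOT()/es bookkeeping that autotest_common.sh wraps around it is omitted):

  # Sketch: spdk_tgt should exit non-zero and print the error logged above.
  SPDK_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: --wait-for-rpc accepted without an RPC server" >&2
      exit 1
  fi
  echo "flag combination rejected, as the test expects"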
00:04:12.807 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:12.807 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:12.807 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:12.807 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:12.807 00:04:12.807 real 0m0.087s 00:04:12.807 user 0m0.055s 00:04:12.807 sys 0m0.032s 00:04:12.807 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.807 17:49:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:12.807 ************************************ 00:04:12.807 END TEST skip_rpc_with_delay 00:04:12.807 ************************************ 00:04:12.807 17:49:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:12.807 17:49:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:12.807 17:49:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:12.807 17:49:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.807 17:49:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.807 17:49:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.807 ************************************ 00:04:12.807 START TEST exit_on_failed_rpc_init 00:04:12.807 ************************************ 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3323915 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3323915 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3323915 ']' 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.807 17:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.807 [2024-11-20 17:49:31.275655] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:12.807 [2024-11-20 17:49:31.275711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323915 ] 00:04:13.067 [2024-11-20 17:49:31.360546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.067 [2024-11-20 17:49:31.401659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:13.639 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.639 [2024-11-20 17:49:32.141301] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:13.639 [2024-11-20 17:49:32.141353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324235 ] 00:04:13.901 [2024-11-20 17:49:32.234898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.901 [2024-11-20 17:49:32.270649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.901 [2024-11-20 17:49:32.270705] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
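For context: here, too, the error is the point of the test. exit_on_failed_rpc_init first starts a target on core mask 0x1 that takes ownership of the default RPC socket, then asserts that a second target on core mask 0x2 fails RPC initialization; the "Unable to start RPC service" and spdk_app_stop lines that follow complete that failure. A rough sketch under the same assumptions (the real test wraps the second launch in NOT() and maps the exit status through the es accounting shown below):

  # Sketch: two spdk_tgt instances contending for /var/tmp/spdk.sock.
  SPDK_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &
  first_pid=$!
  sleep 1   # crude settle; the harness uses waitforlisten instead
  if "$SPDK_TGT" -m 0x2; then
      echo "unexpected: second target started despite the socket conflict" >&2
  fi
  kill "$first_pid"

Two instances can coexist only when one is pointed at a different RPC socket, for example with -r /var/tmp/spdk2.sock.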
00:04:13.901 [2024-11-20 17:49:32.270715] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:13.901 [2024-11-20 17:49:32.270722] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3323915 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3323915 ']' 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3323915 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3323915 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3323915' 00:04:13.901 killing process with pid 3323915 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3323915 00:04:13.901 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3323915 00:04:14.162 00:04:14.162 real 0m1.360s 00:04:14.162 user 0m1.582s 00:04:14.162 sys 0m0.392s 00:04:14.162 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.162 17:49:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.162 ************************************ 00:04:14.162 END TEST exit_on_failed_rpc_init 00:04:14.162 ************************************ 00:04:14.162 17:49:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:14.162 00:04:14.162 real 0m13.820s 00:04:14.162 user 0m13.422s 00:04:14.162 sys 0m1.549s 00:04:14.162 17:49:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.162 17:49:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.162 ************************************ 00:04:14.162 END TEST skip_rpc 00:04:14.162 ************************************ 00:04:14.163 17:49:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:14.163 17:49:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.163 17:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.163 17:49:32 -- 
common/autotest_common.sh@10 -- # set +x 00:04:14.424 ************************************ 00:04:14.424 START TEST rpc_client 00:04:14.424 ************************************ 00:04:14.424 17:49:32 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:14.424 * Looking for test storage... 00:04:14.424 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:14.424 17:49:32 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.424 17:49:32 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.424 17:49:32 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.424 17:49:32 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.424 17:49:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:14.424 17:49:32 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.424 17:49:32 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.424 --rc genhtml_branch_coverage=1 00:04:14.424 --rc genhtml_function_coverage=1 00:04:14.424 --rc genhtml_legend=1 00:04:14.424 --rc geninfo_all_blocks=1 00:04:14.424 --rc geninfo_unexecuted_blocks=1 00:04:14.424 00:04:14.424 ' 00:04:14.424 17:49:32 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.425 --rc genhtml_branch_coverage=1 00:04:14.425 --rc genhtml_function_coverage=1 00:04:14.425 --rc genhtml_legend=1 00:04:14.425 --rc geninfo_all_blocks=1 00:04:14.425 --rc geninfo_unexecuted_blocks=1 00:04:14.425 00:04:14.425 ' 00:04:14.425 17:49:32 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.425 --rc genhtml_branch_coverage=1 00:04:14.425 --rc genhtml_function_coverage=1 00:04:14.425 --rc genhtml_legend=1 00:04:14.425 --rc geninfo_all_blocks=1 00:04:14.425 --rc geninfo_unexecuted_blocks=1 00:04:14.425 00:04:14.425 ' 00:04:14.425 17:49:32 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.425 --rc genhtml_branch_coverage=1 00:04:14.425 --rc genhtml_function_coverage=1 00:04:14.425 --rc genhtml_legend=1 00:04:14.425 --rc geninfo_all_blocks=1 00:04:14.425 --rc geninfo_unexecuted_blocks=1 00:04:14.425 00:04:14.425 ' 00:04:14.425 17:49:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:14.425 OK 00:04:14.425 17:49:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:14.425 00:04:14.425 real 0m0.220s 00:04:14.425 user 0m0.129s 00:04:14.425 sys 0m0.104s 00:04:14.425 17:49:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.425 17:49:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:14.425 ************************************ 00:04:14.425 END TEST rpc_client 00:04:14.425 ************************************ 00:04:14.686 17:49:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:14.686 
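For context: the json_config suite starting here drives the target over the same JSON-RPC surface exercised above (the save_config dump and the nvmf_get_transports error), but through scripts/rpc.py against the app socket. Illustratively, with paths assumed from this workspace:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  # Dump the live configuration as JSON, the same shape as the dump above ...
  "$RPC" -s "$SOCK" save_config > /tmp/spdk_tgt_config.json
  # ... and replay it into a target that was started with --wait-for-rpc.
  "$RPC" -s "$SOCK" load_config < /tmp/spdk_tgt_config.json

A failed method call comes back as a JSON-RPC error object, as in the nvmf_get_transports exchange near the top of this run, where asking for the not-yet-created tcp transport returned code -19, "No such device".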
17:49:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.686 17:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.687 17:49:32 -- common/autotest_common.sh@10 -- # set +x 00:04:14.687 ************************************ 00:04:14.687 START TEST json_config 00:04:14.687 ************************************ 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.687 17:49:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.687 17:49:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.687 17:49:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.687 17:49:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.687 17:49:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.687 17:49:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.687 17:49:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.687 17:49:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.687 17:49:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.687 17:49:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.687 17:49:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.687 17:49:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:14.687 17:49:33 json_config -- scripts/common.sh@345 -- # : 1 00:04:14.687 17:49:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.687 17:49:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.687 17:49:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:14.687 17:49:33 json_config -- scripts/common.sh@353 -- # local d=1 00:04:14.687 17:49:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.687 17:49:33 json_config -- scripts/common.sh@355 -- # echo 1 00:04:14.687 17:49:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.687 17:49:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:14.687 17:49:33 json_config -- scripts/common.sh@353 -- # local d=2 00:04:14.687 17:49:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.687 17:49:33 json_config -- scripts/common.sh@355 -- # echo 2 00:04:14.687 17:49:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.687 17:49:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.687 17:49:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.687 17:49:33 json_config -- scripts/common.sh@368 -- # return 0 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.687 --rc genhtml_branch_coverage=1 00:04:14.687 --rc genhtml_function_coverage=1 00:04:14.687 --rc genhtml_legend=1 00:04:14.687 --rc geninfo_all_blocks=1 00:04:14.687 --rc geninfo_unexecuted_blocks=1 00:04:14.687 00:04:14.687 ' 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.687 --rc genhtml_branch_coverage=1 00:04:14.687 --rc genhtml_function_coverage=1 00:04:14.687 --rc genhtml_legend=1 00:04:14.687 --rc geninfo_all_blocks=1 00:04:14.687 --rc geninfo_unexecuted_blocks=1 00:04:14.687 00:04:14.687 ' 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.687 --rc genhtml_branch_coverage=1 00:04:14.687 --rc genhtml_function_coverage=1 00:04:14.687 --rc genhtml_legend=1 00:04:14.687 --rc geninfo_all_blocks=1 00:04:14.687 --rc geninfo_unexecuted_blocks=1 00:04:14.687 00:04:14.687 ' 00:04:14.687 17:49:33 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.687 --rc genhtml_branch_coverage=1 00:04:14.687 --rc genhtml_function_coverage=1 00:04:14.687 --rc genhtml_legend=1 00:04:14.687 --rc geninfo_all_blocks=1 00:04:14.687 --rc geninfo_unexecuted_blocks=1 00:04:14.687 00:04:14.687 ' 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:14.687 17:49:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:14.687 17:49:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:14.687 17:49:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.687 17:49:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.687 17:49:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.687 17:49:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.687 17:49:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.687 17:49:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.687 17:49:33 json_config -- paths/export.sh@5 -- # export PATH 00:04:14.687 17:49:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@51 -- # : 0 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:14.687 
17:49:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:14.687 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:14.687 17:49:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:14.687 INFO: JSON configuration test init 00:04:14.687 17:49:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:14.688 17:49:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:14.688 17:49:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.688 17:49:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.688 17:49:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:14.688 17:49:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.688 17:49:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.949 17:49:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:14.949 17:49:33 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:14.949 17:49:33 json_config -- json_config/common.sh@10 -- # shift 00:04:14.949 17:49:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.949 17:49:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.949 17:49:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.949 17:49:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.949 17:49:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.949 17:49:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3324516 00:04:14.949 17:49:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.949 Waiting for target to run... 00:04:14.949 17:49:33 json_config -- json_config/common.sh@25 -- # waitforlisten 3324516 /var/tmp/spdk_tgt.sock 00:04:14.949 17:49:33 json_config -- common/autotest_common.sh@835 -- # '[' -z 3324516 ']' 00:04:14.949 17:49:33 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.949 17:49:33 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.949 17:49:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:14.949 17:49:33 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.949 17:49:33 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.949 17:49:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.949 [2024-11-20 17:49:33.293706] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:14.949 [2024-11-20 17:49:33.293786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324516 ] 00:04:15.211 [2024-11-20 17:49:33.588518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.211 [2024-11-20 17:49:33.616930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.783 17:49:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.783 17:49:34 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:15.783 17:49:34 json_config -- json_config/common.sh@26 -- # echo '' 00:04:15.783 00:04:15.783 17:49:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:15.783 17:49:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:15.783 17:49:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.783 17:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.783 17:49:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:15.783 17:49:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:15.783 17:49:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.783 17:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.783 17:49:34 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:15.783 17:49:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:15.783 17:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:16.355 17:49:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.355 17:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:16.355 17:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@54 -- 
# uniq -u 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@54 -- # sort 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:16.355 17:49:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:16.355 17:49:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.355 17:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:16.616 17:49:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.616 17:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:16.616 17:49:34 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:16.616 17:49:34 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:04:16.616 17:49:34 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:16.616 17:49:34 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:16.617 17:49:34 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:16.617 17:49:34 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:16.617 17:49:34 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:16.617 17:49:34 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:16.617 17:49:34 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:16.617 17:49:34 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:04:16.617 17:49:34 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:16.617 17:49:34 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:16.617 17:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:24.835 
17:49:42 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:24.835 17:49:42 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:04:24.836 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:04:24.836 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:04:24.836 17:49:42 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:04:24.836 Found net devices under 0000:98:00.0: mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:04:24.836 Found net devices under 0000:98:00.1: mlx_0_1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@62 -- # uname 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:24.836 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:24.836 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:04:24.836 altname enp152s0f0np0 00:04:24.836 altname ens817f0np0 00:04:24.836 inet 192.168.100.8/24 scope global mlx_0_0 00:04:24.836 valid_lft forever preferred_lft forever 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:24.836 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:24.836 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:04:24.836 altname enp152s0f1np1 00:04:24.836 altname ens817f1np1 
00:04:24.836 inet 192.168.100.9/24 scope global mlx_0_1 00:04:24.836 valid_lft forever preferred_lft forever 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@450 -- # return 0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:24.836 17:49:42 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:24.837 17:49:42 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:24.837 17:49:42 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:24.837 17:49:42 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:24.837 17:49:42 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:04:24.837 192.168.100.9' 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:04:24.837 192.168.100.9' 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@485 -- # head -n 1 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:24.837 17:49:43 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:04:24.837 192.168.100.9' 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@486 -- # head -n 1 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:04:24.837 17:49:43 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:04:24.837 17:49:43 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:24.837 17:49:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:24.837 17:49:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:24.837 MallocForNvmf0 00:04:24.837 17:49:43 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:24.837 17:49:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:25.098 MallocForNvmf1 00:04:25.098 17:49:43 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:25.098 17:49:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:25.098 [2024-11-20 17:49:43.568224] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:25.098 [2024-11-20 17:49:43.603496] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbc5c70/0xa9a7f0) succeed. 00:04:25.098 [2024-11-20 17:49:43.617748] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbc4c60/0xb1a480) succeed. 
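The trace above resolves each Mellanox PCI function to its Linux netdev through sysfs and then harvests the interface's IPv4 address with an ip/awk/cut pipeline, ending up with NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9. A minimal standalone sketch of that pattern (the PCI address and interface name are the ones from this run):

#!/usr/bin/env bash
# Map a PCI function to its netdev via sysfs, then pull its primary IPv4,
# mirroring the get_ip_address steps traced from nvmf/common.sh.
pci=0000:98:00.0                                   # first mlx5 port in this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per attached netdev
ifname=${pci_net_devs[0]##*/}                      # e.g. mlx_0_0

# "ip -o" prints one record per line; field 4 is ADDRESS/PREFIX.
ip_addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
echo "Found net devices under $pci: $ifname ($ip_addr)"

The same lookup, run over both ports, is what produces the two-line RDMA_IP_LIST that head and tail then split into the first and second target IPs.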
00:04:25.359 17:49:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:25.359 17:49:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:25.359 17:49:43 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:25.359 17:49:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:25.620 17:49:44 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:25.620 17:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:25.881 17:49:44 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:25.881 17:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:25.881 [2024-11-20 17:49:44.351806] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:25.881 17:49:44 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:25.881 17:49:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.881 17:49:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.142 17:49:44 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:26.142 17:49:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.142 17:49:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.142 17:49:44 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:26.142 17:49:44 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:26.142 17:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:26.142 MallocBdevForConfigChangeCheck 00:04:26.142 17:49:44 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:26.142 17:49:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.142 17:49:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.403 17:49:44 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:26.403 17:49:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.663 17:49:44 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:26.663 INFO: shutting down applications... 
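Stripped of the tgt_rpc wrapper, the calls above build the entire NVMe-oF/RDMA target in one short RPC sequence: two malloc bdevs as backing storage, an RDMA transport, one subsystem, two namespaces, and a listener on the first harvested IP. Collected into a plain script (sizes, NQN, serial and address are the values from this run):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB, 512 B blocks
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB, 1 KiB blocks
$rpc -s $sock nvmf_create_transport -t rdma -u 8192 -c 0        # -c 0 is raised to the 256 B in-capsule minimum (see the warning above)
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420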
00:04:26.663 17:49:44 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:26.663 17:49:44 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:26.663 17:49:44 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:26.663 17:49:44 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:26.924 Calling clear_iscsi_subsystem 00:04:26.924 Calling clear_nvmf_subsystem 00:04:26.924 Calling clear_nbd_subsystem 00:04:26.924 Calling clear_ublk_subsystem 00:04:26.924 Calling clear_vhost_blk_subsystem 00:04:26.924 Calling clear_vhost_scsi_subsystem 00:04:26.924 Calling clear_bdev_subsystem 00:04:26.924 17:49:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:26.924 17:49:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:26.924 17:49:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:26.924 17:49:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.924 17:49:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:26.924 17:49:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:27.496 17:49:45 json_config -- json_config/json_config.sh@352 -- # break 00:04:27.496 17:49:45 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:27.496 17:49:45 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:27.496 17:49:45 json_config -- json_config/common.sh@31 -- # local app=target 00:04:27.496 17:49:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:27.496 17:49:45 json_config -- json_config/common.sh@35 -- # [[ -n 3324516 ]] 00:04:27.496 17:49:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3324516 00:04:27.496 17:49:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:27.496 17:49:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.496 17:49:45 json_config -- json_config/common.sh@41 -- # kill -0 3324516 00:04:27.496 17:49:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.756 17:49:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.756 17:49:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.756 17:49:46 json_config -- json_config/common.sh@41 -- # kill -0 3324516 00:04:27.756 17:49:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.756 17:49:46 json_config -- json_config/common.sh@43 -- # break 00:04:27.756 17:49:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.756 17:49:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.756 SPDK target shutdown done 00:04:27.756 17:49:46 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:27.756 INFO: relaunching applications... 
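The shutdown just traced is deliberately polite: json_config_test_shutdown_app sends SIGINT and then probes the PID for up to 30 half-second intervals (kill -0 delivers no signal, it only tests whether the process still exists) before reporting "SPDK target shutdown done". A condensed version of the loop as traced from json_config/common.sh:

pid=3324516                               # app_pid["target"] in this run
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break   # target gone, stop waiting
    sleep 0.5
done
echo 'SPDK target shutdown done'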
00:04:27.756 17:49:46 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.756 17:49:46 json_config -- json_config/common.sh@9 -- # local app=target 00:04:27.756 17:49:46 json_config -- json_config/common.sh@10 -- # shift 00:04:27.756 17:49:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.756 17:49:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.756 17:49:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.756 17:49:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.756 17:49:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.756 17:49:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3329802 00:04:27.756 17:49:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.756 Waiting for target to run... 00:04:27.756 17:49:46 json_config -- json_config/common.sh@25 -- # waitforlisten 3329802 /var/tmp/spdk_tgt.sock 00:04:27.756 17:49:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.756 17:49:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 3329802 ']' 00:04:27.756 17:49:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.756 17:49:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.757 17:49:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.757 17:49:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.757 17:49:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.017 [2024-11-20 17:49:46.326368] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:28.017 [2024-11-20 17:49:46.326429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329802 ] 00:04:28.278 [2024-11-20 17:49:46.569259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.278 [2024-11-20 17:49:46.597647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.850 [2024-11-20 17:49:47.147603] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x237da20/0x2389280) succeed. 00:04:28.850 [2024-11-20 17:49:47.161432] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x237fc10/0x24092c0) succeed. 
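After the relaunch, waitforlisten blocks until the new target answers on the RPC socket before the test proceeds. A minimal sketch of that start-and-wait pattern, using rpc_get_methods as the liveness probe (the probe choice is an assumption here; the real waitforlisten helper lives in autotest_common.sh):

spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock
cfg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json

# Relaunch from the saved JSON config, then poll until the RPC socket responds.
$spdk_tgt -m 0x1 -s 1024 -r "$sock" --json "$cfg" &
pid=$!
until $rpc -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" || { echo 'target died during startup'; exit 1; }
    sleep 0.1
done
echo "target $pid is listening on $sock"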
00:04:28.850 [2024-11-20 17:49:47.215969] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:28.850 17:49:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.850 17:49:47 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:28.850 17:49:47 json_config -- json_config/common.sh@26 -- # echo '' 00:04:28.850 00:04:28.850 17:49:47 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:28.850 17:49:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:28.850 INFO: Checking if target configuration is the same... 00:04:28.850 17:49:47 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.850 17:49:47 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:28.850 17:49:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.850 + '[' 2 -ne 2 ']' 00:04:28.850 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:28.850 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:28.850 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:28.850 +++ basename /dev/fd/62 00:04:28.850 ++ mktemp /tmp/62.XXX 00:04:28.850 + tmp_file_1=/tmp/62.Mny 00:04:28.850 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.850 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:28.850 + tmp_file_2=/tmp/spdk_tgt_config.json.rRM 00:04:28.850 + ret=0 00:04:28.850 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.109 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.109 + diff -u /tmp/62.Mny /tmp/spdk_tgt_config.json.rRM 00:04:29.109 + echo 'INFO: JSON config files are the same' 00:04:29.109 INFO: JSON config files are the same 00:04:29.109 + rm /tmp/62.Mny /tmp/spdk_tgt_config.json.rRM 00:04:29.109 + exit 0 00:04:29.109 17:49:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:29.109 17:49:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:29.109 INFO: changing configuration and checking if this can be detected... 
00:04:29.109 17:49:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:29.109 17:49:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:29.369 17:49:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.369 17:49:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:29.369 17:49:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.369 + '[' 2 -ne 2 ']' 00:04:29.369 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:29.369 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:29.369 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:29.369 +++ basename /dev/fd/62 00:04:29.369 ++ mktemp /tmp/62.XXX 00:04:29.369 + tmp_file_1=/tmp/62.keF 00:04:29.369 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.369 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:29.369 + tmp_file_2=/tmp/spdk_tgt_config.json.w13 00:04:29.369 + ret=0 00:04:29.369 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.630 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.630 + diff -u /tmp/62.keF /tmp/spdk_tgt_config.json.w13 00:04:29.630 + ret=1 00:04:29.630 + echo '=== Start of file: /tmp/62.keF ===' 00:04:29.630 + cat /tmp/62.keF 00:04:29.630 + echo '=== End of file: /tmp/62.keF ===' 00:04:29.630 + echo '' 00:04:29.630 + echo '=== Start of file: /tmp/spdk_tgt_config.json.w13 ===' 00:04:29.630 + cat /tmp/spdk_tgt_config.json.w13 00:04:29.630 + echo '=== End of file: /tmp/spdk_tgt_config.json.w13 ===' 00:04:29.630 + echo '' 00:04:29.630 + rm /tmp/62.keF /tmp/spdk_tgt_config.json.w13 00:04:29.630 + exit 1 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:29.630 INFO: configuration change detected. 
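Both json_diff.sh runs above follow the same recipe: each side (the live save_config output arriving on /dev/fd/62 and the on-disk spdk_tgt_config.json) is piped through config_filter.py -method sort into a mktemp file, and a plain diff -u decides the outcome: exit 0 when the sorted configs match, exit 1 (with both files dumped) once MallocBdevForConfigChangeCheck has been deleted. The core of that flow, simplified from the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py

# Normalize both configs so ordering differences never count as changes.
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$tmp_file_1"
$filter -method sort < spdk_tgt_config.json > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm "$tmp_file_1" "$tmp_file_2"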
00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:29.630 17:49:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.630 17:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@324 -- # [[ -n 3329802 ]] 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:29.630 17:49:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.630 17:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:29.630 17:49:48 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:29.630 17:49:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.630 17:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.892 17:49:48 json_config -- json_config/json_config.sh@330 -- # killprocess 3329802 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@954 -- # '[' -z 3329802 ']' 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@958 -- # kill -0 3329802 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@959 -- # uname 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3329802 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3329802' 00:04:29.892 killing process with pid 3329802 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@973 -- # kill 3329802 00:04:29.892 17:49:48 json_config -- common/autotest_common.sh@978 -- # wait 3329802 00:04:30.153 17:49:48 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.153 17:49:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:30.153 17:49:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.153 17:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.153 17:49:48 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:30.153 17:49:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:30.153 INFO: Success 00:04:30.153 17:49:48 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:30.153 17:49:48 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:30.153 17:49:48 json_config -- nvmf/common.sh@121 -- # sync 00:04:30.153 17:49:48 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:04:30.153 17:49:48 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:04:30.153 17:49:48 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:04:30.153 17:49:48 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:30.153 17:49:48 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:04:30.153 00:04:30.153 real 0m15.624s 00:04:30.153 user 0m19.397s 00:04:30.153 sys 0m7.965s 00:04:30.153 17:49:48 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.153 17:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.153 ************************************ 00:04:30.153 END TEST json_config 00:04:30.153 ************************************ 00:04:30.153 17:49:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:30.153 17:49:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.153 17:49:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.153 17:49:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.416 ************************************ 00:04:30.416 START TEST json_config_extra_key 00:04:30.416 ************************************ 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.416 --rc genhtml_branch_coverage=1 00:04:30.416 --rc genhtml_function_coverage=1 00:04:30.416 --rc genhtml_legend=1 00:04:30.416 --rc geninfo_all_blocks=1 00:04:30.416 --rc geninfo_unexecuted_blocks=1 00:04:30.416 00:04:30.416 ' 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.416 --rc genhtml_branch_coverage=1 00:04:30.416 --rc genhtml_function_coverage=1 00:04:30.416 --rc genhtml_legend=1 00:04:30.416 --rc geninfo_all_blocks=1 00:04:30.416 --rc geninfo_unexecuted_blocks=1 00:04:30.416 00:04:30.416 ' 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.416 --rc genhtml_branch_coverage=1 00:04:30.416 --rc genhtml_function_coverage=1 00:04:30.416 --rc genhtml_legend=1 00:04:30.416 --rc geninfo_all_blocks=1 00:04:30.416 --rc geninfo_unexecuted_blocks=1 00:04:30.416 00:04:30.416 ' 00:04:30.416 17:49:48 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.416 --rc genhtml_branch_coverage=1 00:04:30.416 --rc genhtml_function_coverage=1 00:04:30.416 --rc genhtml_legend=1 00:04:30.416 --rc geninfo_all_blocks=1 00:04:30.416 --rc geninfo_unexecuted_blocks=1 00:04:30.416 00:04:30.416 ' 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.416 
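The cmp_versions trace a few entries back (the "lt 1.15 2" check that decides which lcov flags to export) is a generic shell version comparison: split both strings on '.', '-' and ':', then compare field by field numerically, treating missing fields as zero. Its essentials, reduced to a standalone helper:

# Return 0 when version $1 sorts strictly before $2 (e.g. 1.15 < 2),
# following the field-wise compare traced from scripts/common.sh.
version_lt() {
    local IFS='.-:'
    local -a ver1=($1) ver2=($2)
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov'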
17:49:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.416 17:49:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.416 17:49:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.416 17:49:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.416 17:49:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.416 17:49:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:30.416 17:49:48 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.416 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.416 17:49:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:30.416 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:30.417 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:30.417 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.417 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:30.417 INFO: launching applications... 
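json_config_extra_key keeps its process bookkeeping in one associative array per attribute (app_pid, app_socket, app_params, configs_path), all keyed by the app name, so a single generic start helper can serve any number of daemons. The shape of that arrangement; the declarations are verbatim from the trace, while the helper body is only a sketch of json_config_test_start_app with the spdk_tgt path shortened:

declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')

start_app() {
    local app=$1
    # spdk_tgt stands in for the full build/bin path used in this run.
    spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
}
start_app target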
00:04:30.417 17:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3330583 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.417 Waiting for target to run... 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3330583 /var/tmp/spdk_tgt.sock 00:04:30.417 17:49:48 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3330583 ']' 00:04:30.417 17:49:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:30.417 17:49:48 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.417 17:49:48 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.417 17:49:48 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.417 17:49:48 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.417 17:49:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.677 [2024-11-20 17:49:48.962204] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:30.677 [2024-11-20 17:49:48.962257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330583 ] 00:04:30.936 [2024-11-20 17:49:49.293145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.936 [2024-11-20 17:49:49.322758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.508 17:49:49 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.508 17:49:49 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:31.508 00:04:31.508 17:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:31.508 INFO: shutting down applications... 
00:04:31.508 17:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3330583 ]] 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3330583 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3330583 00:04:31.508 17:49:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.769 17:49:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.769 17:49:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.769 17:49:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3330583 00:04:31.769 17:49:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:31.769 17:49:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:31.769 17:49:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:31.769 17:49:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:31.769 SPDK target shutdown done 00:04:31.769 17:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:31.769 Success 00:04:31.769 00:04:31.769 real 0m1.571s 00:04:31.769 user 0m1.175s 00:04:31.769 sys 0m0.448s 00:04:31.769 17:49:50 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.769 17:49:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:31.769 ************************************ 00:04:31.769 END TEST json_config_extra_key 00:04:31.769 ************************************ 00:04:32.031 17:49:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:32.031 17:49:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.031 17:49:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.031 17:49:50 -- common/autotest_common.sh@10 -- # set +x 00:04:32.031 ************************************ 00:04:32.031 START TEST alias_rpc 00:04:32.031 ************************************ 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:32.031 * Looking for test storage... 
00:04:32.031 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.031 17:49:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.031 --rc genhtml_branch_coverage=1 00:04:32.031 --rc genhtml_function_coverage=1 00:04:32.031 --rc genhtml_legend=1 00:04:32.031 --rc geninfo_all_blocks=1 00:04:32.031 --rc geninfo_unexecuted_blocks=1 00:04:32.031 00:04:32.031 ' 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.031 --rc genhtml_branch_coverage=1 00:04:32.031 --rc genhtml_function_coverage=1 00:04:32.031 --rc genhtml_legend=1 00:04:32.031 --rc geninfo_all_blocks=1 00:04:32.031 --rc geninfo_unexecuted_blocks=1 00:04:32.031 00:04:32.031 ' 00:04:32.031 17:49:50 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.031 --rc genhtml_branch_coverage=1 00:04:32.031 --rc genhtml_function_coverage=1 00:04:32.031 --rc genhtml_legend=1 00:04:32.031 --rc geninfo_all_blocks=1 00:04:32.031 --rc geninfo_unexecuted_blocks=1 00:04:32.031 00:04:32.031 ' 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.031 --rc genhtml_branch_coverage=1 00:04:32.031 --rc genhtml_function_coverage=1 00:04:32.031 --rc genhtml_legend=1 00:04:32.031 --rc geninfo_all_blocks=1 00:04:32.031 --rc geninfo_unexecuted_blocks=1 00:04:32.031 00:04:32.031 ' 00:04:32.031 17:49:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:32.031 17:49:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3330974 00:04:32.031 17:49:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3330974 00:04:32.031 17:49:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3330974 ']' 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.031 17:49:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.293 [2024-11-20 17:49:50.608550] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:32.293 [2024-11-20 17:49:50.608601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330974 ] 00:04:32.293 [2024-11-20 17:49:50.688361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.293 [2024-11-20 17:49:50.724914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.237 17:49:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:33.237 17:49:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3330974 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3330974 ']' 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3330974 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3330974 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3330974' 00:04:33.237 killing process with pid 3330974 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@973 -- # kill 3330974 00:04:33.237 17:49:51 alias_rpc -- common/autotest_common.sh@978 -- # wait 3330974 00:04:33.498 00:04:33.499 real 0m1.523s 00:04:33.499 user 0m1.681s 00:04:33.499 sys 0m0.412s 00:04:33.499 17:49:51 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.499 17:49:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.499 ************************************ 00:04:33.499 END TEST alias_rpc 00:04:33.499 ************************************ 00:04:33.499 17:49:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:33.499 17:49:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:33.499 17:49:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.499 17:49:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.499 17:49:51 -- common/autotest_common.sh@10 -- # set +x 00:04:33.499 ************************************ 00:04:33.499 START TEST spdkcli_tcp 00:04:33.499 ************************************ 00:04:33.499 17:49:51 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:33.760 * Looking for test storage... 
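killprocess, which reaped both the json_config target earlier and the alias_rpc target here, is more careful than a bare kill: it probes the PID, confirms via ps that it still names the expected reactor process (and would special-case an app running under sudo), then kills and waits so the exit status is collected. Condensed from the traced calls:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                        # nothing left to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in these runs
    if [ "$process_name" = sudo ]; then
        :   # the real helper signals the child under sudo (branch not taken here)
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap and propagate status
}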
00:04:33.761 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.761 17:49:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.761 --rc genhtml_branch_coverage=1 00:04:33.761 --rc genhtml_function_coverage=1 00:04:33.761 --rc genhtml_legend=1 00:04:33.761 --rc geninfo_all_blocks=1 00:04:33.761 --rc geninfo_unexecuted_blocks=1 00:04:33.761 00:04:33.761 ' 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.761 --rc genhtml_branch_coverage=1 00:04:33.761 --rc genhtml_function_coverage=1 00:04:33.761 --rc genhtml_legend=1 00:04:33.761 --rc geninfo_all_blocks=1 00:04:33.761 --rc geninfo_unexecuted_blocks=1 
00:04:33.761 00:04:33.761 ' 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.761 --rc genhtml_branch_coverage=1 00:04:33.761 --rc genhtml_function_coverage=1 00:04:33.761 --rc genhtml_legend=1 00:04:33.761 --rc geninfo_all_blocks=1 00:04:33.761 --rc geninfo_unexecuted_blocks=1 00:04:33.761 00:04:33.761 ' 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.761 --rc genhtml_branch_coverage=1 00:04:33.761 --rc genhtml_function_coverage=1 00:04:33.761 --rc genhtml_legend=1 00:04:33.761 --rc geninfo_all_blocks=1 00:04:33.761 --rc geninfo_unexecuted_blocks=1 00:04:33.761 00:04:33.761 ' 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3331339 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3331339 00:04:33.761 17:49:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3331339 ']' 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.761 17:49:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.761 [2024-11-20 17:49:52.200205] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:33.761 [2024-11-20 17:49:52.200282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331339 ] 00:04:33.761 [2024-11-20 17:49:52.282931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.022 [2024-11-20 17:49:52.326904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.022 [2024-11-20 17:49:52.326932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.592 17:49:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.592 17:49:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:34.592 17:49:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3331390 00:04:34.592 17:49:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:34.592 17:49:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:34.853 [ 00:04:34.854 "bdev_malloc_delete", 00:04:34.854 "bdev_malloc_create", 00:04:34.854 "bdev_null_resize", 00:04:34.854 "bdev_null_delete", 00:04:34.854 "bdev_null_create", 00:04:34.854 "bdev_nvme_cuse_unregister", 00:04:34.854 "bdev_nvme_cuse_register", 00:04:34.854 "bdev_opal_new_user", 00:04:34.854 "bdev_opal_set_lock_state", 00:04:34.854 "bdev_opal_delete", 00:04:34.854 "bdev_opal_get_info", 00:04:34.854 "bdev_opal_create", 00:04:34.854 "bdev_nvme_opal_revert", 00:04:34.854 "bdev_nvme_opal_init", 00:04:34.854 "bdev_nvme_send_cmd", 00:04:34.854 "bdev_nvme_set_keys", 00:04:34.854 "bdev_nvme_get_path_iostat", 00:04:34.854 "bdev_nvme_get_mdns_discovery_info", 00:04:34.854 "bdev_nvme_stop_mdns_discovery", 00:04:34.854 "bdev_nvme_start_mdns_discovery", 00:04:34.854 "bdev_nvme_set_multipath_policy", 00:04:34.854 "bdev_nvme_set_preferred_path", 00:04:34.854 "bdev_nvme_get_io_paths", 00:04:34.854 "bdev_nvme_remove_error_injection", 00:04:34.854 "bdev_nvme_add_error_injection", 00:04:34.854 "bdev_nvme_get_discovery_info", 00:04:34.854 "bdev_nvme_stop_discovery", 00:04:34.854 "bdev_nvme_start_discovery", 00:04:34.854 "bdev_nvme_get_controller_health_info", 00:04:34.854 "bdev_nvme_disable_controller", 00:04:34.854 "bdev_nvme_enable_controller", 00:04:34.854 "bdev_nvme_reset_controller", 00:04:34.854 "bdev_nvme_get_transport_statistics", 00:04:34.854 "bdev_nvme_apply_firmware", 00:04:34.854 "bdev_nvme_detach_controller", 00:04:34.854 "bdev_nvme_get_controllers", 00:04:34.854 "bdev_nvme_attach_controller", 00:04:34.854 "bdev_nvme_set_hotplug", 00:04:34.854 "bdev_nvme_set_options", 00:04:34.854 "bdev_passthru_delete", 00:04:34.854 "bdev_passthru_create", 00:04:34.854 "bdev_lvol_set_parent_bdev", 00:04:34.854 "bdev_lvol_set_parent", 00:04:34.854 "bdev_lvol_check_shallow_copy", 00:04:34.854 "bdev_lvol_start_shallow_copy", 00:04:34.854 "bdev_lvol_grow_lvstore", 00:04:34.854 "bdev_lvol_get_lvols", 00:04:34.854 "bdev_lvol_get_lvstores", 00:04:34.854 "bdev_lvol_delete", 00:04:34.854 "bdev_lvol_set_read_only", 00:04:34.854 "bdev_lvol_resize", 00:04:34.854 "bdev_lvol_decouple_parent", 00:04:34.854 "bdev_lvol_inflate", 00:04:34.854 "bdev_lvol_rename", 00:04:34.854 "bdev_lvol_clone_bdev", 00:04:34.854 "bdev_lvol_clone", 00:04:34.854 "bdev_lvol_snapshot", 00:04:34.854 "bdev_lvol_create", 00:04:34.854 "bdev_lvol_delete_lvstore", 00:04:34.854 "bdev_lvol_rename_lvstore", 
00:04:34.854 "bdev_lvol_create_lvstore", 00:04:34.854 "bdev_raid_set_options", 00:04:34.854 "bdev_raid_remove_base_bdev", 00:04:34.854 "bdev_raid_add_base_bdev", 00:04:34.854 "bdev_raid_delete", 00:04:34.854 "bdev_raid_create", 00:04:34.854 "bdev_raid_get_bdevs", 00:04:34.854 "bdev_error_inject_error", 00:04:34.854 "bdev_error_delete", 00:04:34.854 "bdev_error_create", 00:04:34.854 "bdev_split_delete", 00:04:34.854 "bdev_split_create", 00:04:34.854 "bdev_delay_delete", 00:04:34.854 "bdev_delay_create", 00:04:34.854 "bdev_delay_update_latency", 00:04:34.854 "bdev_zone_block_delete", 00:04:34.854 "bdev_zone_block_create", 00:04:34.854 "blobfs_create", 00:04:34.854 "blobfs_detect", 00:04:34.854 "blobfs_set_cache_size", 00:04:34.854 "bdev_aio_delete", 00:04:34.854 "bdev_aio_rescan", 00:04:34.854 "bdev_aio_create", 00:04:34.854 "bdev_ftl_set_property", 00:04:34.854 "bdev_ftl_get_properties", 00:04:34.854 "bdev_ftl_get_stats", 00:04:34.854 "bdev_ftl_unmap", 00:04:34.854 "bdev_ftl_unload", 00:04:34.854 "bdev_ftl_delete", 00:04:34.854 "bdev_ftl_load", 00:04:34.854 "bdev_ftl_create", 00:04:34.854 "bdev_virtio_attach_controller", 00:04:34.854 "bdev_virtio_scsi_get_devices", 00:04:34.854 "bdev_virtio_detach_controller", 00:04:34.854 "bdev_virtio_blk_set_hotplug", 00:04:34.854 "bdev_iscsi_delete", 00:04:34.854 "bdev_iscsi_create", 00:04:34.854 "bdev_iscsi_set_options", 00:04:34.854 "accel_error_inject_error", 00:04:34.854 "ioat_scan_accel_module", 00:04:34.854 "dsa_scan_accel_module", 00:04:34.854 "iaa_scan_accel_module", 00:04:34.854 "keyring_file_remove_key", 00:04:34.854 "keyring_file_add_key", 00:04:34.854 "keyring_linux_set_options", 00:04:34.854 "fsdev_aio_delete", 00:04:34.854 "fsdev_aio_create", 00:04:34.854 "iscsi_get_histogram", 00:04:34.854 "iscsi_enable_histogram", 00:04:34.854 "iscsi_set_options", 00:04:34.854 "iscsi_get_auth_groups", 00:04:34.854 "iscsi_auth_group_remove_secret", 00:04:34.854 "iscsi_auth_group_add_secret", 00:04:34.854 "iscsi_delete_auth_group", 00:04:34.854 "iscsi_create_auth_group", 00:04:34.854 "iscsi_set_discovery_auth", 00:04:34.854 "iscsi_get_options", 00:04:34.854 "iscsi_target_node_request_logout", 00:04:34.854 "iscsi_target_node_set_redirect", 00:04:34.854 "iscsi_target_node_set_auth", 00:04:34.854 "iscsi_target_node_add_lun", 00:04:34.854 "iscsi_get_stats", 00:04:34.854 "iscsi_get_connections", 00:04:34.854 "iscsi_portal_group_set_auth", 00:04:34.854 "iscsi_start_portal_group", 00:04:34.854 "iscsi_delete_portal_group", 00:04:34.854 "iscsi_create_portal_group", 00:04:34.854 "iscsi_get_portal_groups", 00:04:34.854 "iscsi_delete_target_node", 00:04:34.854 "iscsi_target_node_remove_pg_ig_maps", 00:04:34.854 "iscsi_target_node_add_pg_ig_maps", 00:04:34.854 "iscsi_create_target_node", 00:04:34.854 "iscsi_get_target_nodes", 00:04:34.854 "iscsi_delete_initiator_group", 00:04:34.854 "iscsi_initiator_group_remove_initiators", 00:04:34.854 "iscsi_initiator_group_add_initiators", 00:04:34.854 "iscsi_create_initiator_group", 00:04:34.854 "iscsi_get_initiator_groups", 00:04:34.854 "nvmf_set_crdt", 00:04:34.854 "nvmf_set_config", 00:04:34.854 "nvmf_set_max_subsystems", 00:04:34.854 "nvmf_stop_mdns_prr", 00:04:34.854 "nvmf_publish_mdns_prr", 00:04:34.854 "nvmf_subsystem_get_listeners", 00:04:34.854 "nvmf_subsystem_get_qpairs", 00:04:34.854 "nvmf_subsystem_get_controllers", 00:04:34.854 "nvmf_get_stats", 00:04:34.854 "nvmf_get_transports", 00:04:34.854 "nvmf_create_transport", 00:04:34.854 "nvmf_get_targets", 00:04:34.854 "nvmf_delete_target", 00:04:34.854 "nvmf_create_target", 
00:04:34.854 "nvmf_subsystem_allow_any_host", 00:04:34.854 "nvmf_subsystem_set_keys", 00:04:34.854 "nvmf_subsystem_remove_host", 00:04:34.854 "nvmf_subsystem_add_host", 00:04:34.854 "nvmf_ns_remove_host", 00:04:34.854 "nvmf_ns_add_host", 00:04:34.854 "nvmf_subsystem_remove_ns", 00:04:34.854 "nvmf_subsystem_set_ns_ana_group", 00:04:34.854 "nvmf_subsystem_add_ns", 00:04:34.854 "nvmf_subsystem_listener_set_ana_state", 00:04:34.854 "nvmf_discovery_get_referrals", 00:04:34.854 "nvmf_discovery_remove_referral", 00:04:34.854 "nvmf_discovery_add_referral", 00:04:34.854 "nvmf_subsystem_remove_listener", 00:04:34.854 "nvmf_subsystem_add_listener", 00:04:34.854 "nvmf_delete_subsystem", 00:04:34.854 "nvmf_create_subsystem", 00:04:34.854 "nvmf_get_subsystems", 00:04:34.854 "env_dpdk_get_mem_stats", 00:04:34.854 "nbd_get_disks", 00:04:34.854 "nbd_stop_disk", 00:04:34.854 "nbd_start_disk", 00:04:34.854 "ublk_recover_disk", 00:04:34.854 "ublk_get_disks", 00:04:34.854 "ublk_stop_disk", 00:04:34.855 "ublk_start_disk", 00:04:34.855 "ublk_destroy_target", 00:04:34.855 "ublk_create_target", 00:04:34.855 "virtio_blk_create_transport", 00:04:34.855 "virtio_blk_get_transports", 00:04:34.855 "vhost_controller_set_coalescing", 00:04:34.855 "vhost_get_controllers", 00:04:34.855 "vhost_delete_controller", 00:04:34.855 "vhost_create_blk_controller", 00:04:34.855 "vhost_scsi_controller_remove_target", 00:04:34.855 "vhost_scsi_controller_add_target", 00:04:34.855 "vhost_start_scsi_controller", 00:04:34.855 "vhost_create_scsi_controller", 00:04:34.855 "thread_set_cpumask", 00:04:34.855 "scheduler_set_options", 00:04:34.855 "framework_get_governor", 00:04:34.855 "framework_get_scheduler", 00:04:34.855 "framework_set_scheduler", 00:04:34.855 "framework_get_reactors", 00:04:34.855 "thread_get_io_channels", 00:04:34.855 "thread_get_pollers", 00:04:34.855 "thread_get_stats", 00:04:34.855 "framework_monitor_context_switch", 00:04:34.855 "spdk_kill_instance", 00:04:34.855 "log_enable_timestamps", 00:04:34.855 "log_get_flags", 00:04:34.855 "log_clear_flag", 00:04:34.855 "log_set_flag", 00:04:34.855 "log_get_level", 00:04:34.855 "log_set_level", 00:04:34.855 "log_get_print_level", 00:04:34.855 "log_set_print_level", 00:04:34.855 "framework_enable_cpumask_locks", 00:04:34.855 "framework_disable_cpumask_locks", 00:04:34.855 "framework_wait_init", 00:04:34.855 "framework_start_init", 00:04:34.855 "scsi_get_devices", 00:04:34.855 "bdev_get_histogram", 00:04:34.855 "bdev_enable_histogram", 00:04:34.855 "bdev_set_qos_limit", 00:04:34.855 "bdev_set_qd_sampling_period", 00:04:34.855 "bdev_get_bdevs", 00:04:34.855 "bdev_reset_iostat", 00:04:34.855 "bdev_get_iostat", 00:04:34.855 "bdev_examine", 00:04:34.855 "bdev_wait_for_examine", 00:04:34.855 "bdev_set_options", 00:04:34.855 "accel_get_stats", 00:04:34.855 "accel_set_options", 00:04:34.855 "accel_set_driver", 00:04:34.855 "accel_crypto_key_destroy", 00:04:34.855 "accel_crypto_keys_get", 00:04:34.855 "accel_crypto_key_create", 00:04:34.855 "accel_assign_opc", 00:04:34.855 "accel_get_module_info", 00:04:34.855 "accel_get_opc_assignments", 00:04:34.855 "vmd_rescan", 00:04:34.855 "vmd_remove_device", 00:04:34.855 "vmd_enable", 00:04:34.855 "sock_get_default_impl", 00:04:34.855 "sock_set_default_impl", 00:04:34.855 "sock_impl_set_options", 00:04:34.855 "sock_impl_get_options", 00:04:34.855 "iobuf_get_stats", 00:04:34.855 "iobuf_set_options", 00:04:34.855 "keyring_get_keys", 00:04:34.855 "framework_get_pci_devices", 00:04:34.855 "framework_get_config", 00:04:34.855 "framework_get_subsystems", 
00:04:34.855 "fsdev_set_opts", 00:04:34.855 "fsdev_get_opts", 00:04:34.855 "trace_get_info", 00:04:34.855 "trace_get_tpoint_group_mask", 00:04:34.855 "trace_disable_tpoint_group", 00:04:34.855 "trace_enable_tpoint_group", 00:04:34.855 "trace_clear_tpoint_mask", 00:04:34.855 "trace_set_tpoint_mask", 00:04:34.855 "notify_get_notifications", 00:04:34.855 "notify_get_types", 00:04:34.855 "spdk_get_version", 00:04:34.855 "rpc_get_methods" 00:04:34.855 ] 00:04:34.855 17:49:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:34.855 17:49:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:34.855 17:49:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3331339 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3331339 ']' 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3331339 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3331339 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3331339' 00:04:34.855 killing process with pid 3331339 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3331339 00:04:34.855 17:49:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3331339 00:04:35.117 00:04:35.117 real 0m1.533s 00:04:35.117 user 0m2.811s 00:04:35.117 sys 0m0.457s 00:04:35.117 17:49:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.117 17:49:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.117 ************************************ 00:04:35.117 END TEST spdkcli_tcp 00:04:35.117 ************************************ 00:04:35.117 17:49:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:35.117 17:49:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.117 17:49:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.117 17:49:53 -- common/autotest_common.sh@10 -- # set +x 00:04:35.117 ************************************ 00:04:35.117 START TEST dpdk_mem_utility 00:04:35.117 ************************************ 00:04:35.117 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:35.117 * Looking for test storage... 
00:04:35.379 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.379 17:49:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.379 --rc genhtml_branch_coverage=1 00:04:35.379 --rc genhtml_function_coverage=1 00:04:35.379 --rc genhtml_legend=1 00:04:35.379 --rc geninfo_all_blocks=1 00:04:35.379 --rc geninfo_unexecuted_blocks=1 00:04:35.379 00:04:35.379 ' 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.379 --rc 
genhtml_branch_coverage=1 00:04:35.379 --rc genhtml_function_coverage=1 00:04:35.379 --rc genhtml_legend=1 00:04:35.379 --rc geninfo_all_blocks=1 00:04:35.379 --rc geninfo_unexecuted_blocks=1 00:04:35.379 00:04:35.379 ' 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.379 --rc genhtml_branch_coverage=1 00:04:35.379 --rc genhtml_function_coverage=1 00:04:35.379 --rc genhtml_legend=1 00:04:35.379 --rc geninfo_all_blocks=1 00:04:35.379 --rc geninfo_unexecuted_blocks=1 00:04:35.379 00:04:35.379 ' 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.379 --rc genhtml_branch_coverage=1 00:04:35.379 --rc genhtml_function_coverage=1 00:04:35.379 --rc genhtml_legend=1 00:04:35.379 --rc geninfo_all_blocks=1 00:04:35.379 --rc geninfo_unexecuted_blocks=1 00:04:35.379 00:04:35.379 ' 00:04:35.379 17:49:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:35.379 17:49:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3331701 00:04:35.379 17:49:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3331701 00:04:35.379 17:49:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3331701 ']' 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.379 17:49:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:35.379 [2024-11-20 17:49:53.814575] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:35.379 [2024-11-20 17:49:53.814650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331701 ] 00:04:35.380 [2024-11-20 17:49:53.897709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.640 [2024-11-20 17:49:53.940247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.212 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.212 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:36.212 17:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:36.212 17:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:36.212 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.212 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.212 { 00:04:36.212 "filename": "/tmp/spdk_mem_dump.txt" 00:04:36.212 } 00:04:36.212 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.212 17:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:36.212 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:36.212 1 heaps totaling size 818.000000 MiB 00:04:36.212 size: 818.000000 MiB heap id: 0 00:04:36.212 end heaps---------- 00:04:36.212 9 mempools totaling size 603.782043 MiB 00:04:36.212 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:36.212 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:36.212 size: 100.555481 MiB name: bdev_io_3331701 00:04:36.212 size: 50.003479 MiB name: msgpool_3331701 00:04:36.212 size: 36.509338 MiB name: fsdev_io_3331701 00:04:36.212 size: 21.763794 MiB name: PDU_Pool 00:04:36.212 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:36.212 size: 4.133484 MiB name: evtpool_3331701 00:04:36.212 size: 0.026123 MiB name: Session_Pool 00:04:36.212 end mempools------- 00:04:36.212 6 memzones totaling size 4.142822 MiB 00:04:36.212 size: 1.000366 MiB name: RG_ring_0_3331701 00:04:36.212 size: 1.000366 MiB name: RG_ring_1_3331701 00:04:36.212 size: 1.000366 MiB name: RG_ring_4_3331701 00:04:36.212 size: 1.000366 MiB name: RG_ring_5_3331701 00:04:36.212 size: 0.125366 MiB name: RG_ring_2_3331701 00:04:36.212 size: 0.015991 MiB name: RG_ring_3_3331701 00:04:36.212 end memzones------- 00:04:36.212 17:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:36.212 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:36.212 list of free elements. 
size: 10.852478 MiB 00:04:36.212 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:36.212 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:36.212 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:36.212 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:36.212 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:36.212 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:36.212 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:36.212 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:36.212 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:36.212 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:36.212 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:36.212 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:36.212 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:36.212 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:36.212 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:36.212 list of standard malloc elements. size: 199.218628 MiB 00:04:36.212 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:36.212 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:36.212 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:36.212 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:36.212 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:36.212 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:36.212 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:36.212 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:36.212 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:36.212 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:36.212 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:36.212 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:36.212 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:36.212 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:36.212 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:36.212 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:36.212 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:36.212 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:36.212 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:36.212 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:36.212 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:36.213 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:36.213 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:36.213 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:36.213 list of memzone associated elements. size: 607.928894 MiB 00:04:36.213 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:36.213 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:36.213 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:36.213 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:36.213 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:36.213 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3331701_0 00:04:36.213 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:36.213 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3331701_0 00:04:36.213 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:36.213 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3331701_0 00:04:36.213 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:36.213 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:36.213 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:36.213 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:36.213 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:36.213 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3331701_0 00:04:36.213 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:36.213 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3331701 00:04:36.213 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:36.213 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3331701 00:04:36.213 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:36.213 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:36.213 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:36.213 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:36.213 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:36.213 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:36.213 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:36.213 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:36.213 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:36.213 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3331701 00:04:36.213 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:36.213 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3331701 00:04:36.213 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:36.213 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3331701 00:04:36.213 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:36.213 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3331701 00:04:36.213 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:36.213 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3331701 00:04:36.213 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:36.213 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3331701 00:04:36.213 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:36.213 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:36.213 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:36.213 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:36.213 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:36.213 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:36.213 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:36.213 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3331701 00:04:36.213 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:36.213 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3331701 00:04:36.213 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:36.213 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:36.213 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:36.213 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:36.213 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:36.213 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3331701 00:04:36.213 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:36.213 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:36.213 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:36.213 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3331701 00:04:36.213 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:36.213 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3331701 00:04:36.213 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:36.213 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3331701 00:04:36.213 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:36.213 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:36.213 17:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:36.213 17:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3331701 00:04:36.213 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3331701 ']' 00:04:36.213 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3331701 00:04:36.213 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:36.213 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.213 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3331701 00:04:36.473 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.473 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.473 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3331701' 00:04:36.473 killing process with pid 3331701 00:04:36.473 17:49:54 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3331701 00:04:36.473 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3331701 00:04:36.473 00:04:36.473 real 0m1.397s 00:04:36.473 user 0m1.462s 00:04:36.473 sys 0m0.410s 00:04:36.474 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.474 17:49:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.474 ************************************ 00:04:36.474 END TEST dpdk_mem_utility 00:04:36.474 ************************************ 00:04:36.474 17:49:54 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:36.474 17:49:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.474 17:49:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.474 17:49:55 -- common/autotest_common.sh@10 -- # set +x 00:04:36.735 ************************************ 00:04:36.735 START TEST event 00:04:36.735 ************************************ 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:36.735 * Looking for test storage... 00:04:36.735 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.735 17:49:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.735 17:49:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.735 17:49:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.735 17:49:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.735 17:49:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.735 17:49:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.735 17:49:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.735 17:49:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.735 17:49:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.735 17:49:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.735 17:49:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.735 17:49:55 event -- scripts/common.sh@344 -- # case "$op" in 00:04:36.735 17:49:55 event -- scripts/common.sh@345 -- # : 1 00:04:36.735 17:49:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.735 17:49:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.735 17:49:55 event -- scripts/common.sh@365 -- # decimal 1 00:04:36.735 17:49:55 event -- scripts/common.sh@353 -- # local d=1 00:04:36.735 17:49:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.735 17:49:55 event -- scripts/common.sh@355 -- # echo 1 00:04:36.735 17:49:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.735 17:49:55 event -- scripts/common.sh@366 -- # decimal 2 00:04:36.735 17:49:55 event -- scripts/common.sh@353 -- # local d=2 00:04:36.735 17:49:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.735 17:49:55 event -- scripts/common.sh@355 -- # echo 2 00:04:36.735 17:49:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.735 17:49:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.735 17:49:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.735 17:49:55 event -- scripts/common.sh@368 -- # return 0 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.735 --rc genhtml_branch_coverage=1 00:04:36.735 --rc genhtml_function_coverage=1 00:04:36.735 --rc genhtml_legend=1 00:04:36.735 --rc geninfo_all_blocks=1 00:04:36.735 --rc geninfo_unexecuted_blocks=1 00:04:36.735 00:04:36.735 ' 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.735 --rc genhtml_branch_coverage=1 00:04:36.735 --rc genhtml_function_coverage=1 00:04:36.735 --rc genhtml_legend=1 00:04:36.735 --rc geninfo_all_blocks=1 00:04:36.735 --rc geninfo_unexecuted_blocks=1 00:04:36.735 00:04:36.735 ' 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.735 --rc genhtml_branch_coverage=1 00:04:36.735 --rc genhtml_function_coverage=1 00:04:36.735 --rc genhtml_legend=1 00:04:36.735 --rc geninfo_all_blocks=1 00:04:36.735 --rc geninfo_unexecuted_blocks=1 00:04:36.735 00:04:36.735 ' 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.735 --rc genhtml_branch_coverage=1 00:04:36.735 --rc genhtml_function_coverage=1 00:04:36.735 --rc genhtml_legend=1 00:04:36.735 --rc geninfo_all_blocks=1 00:04:36.735 --rc geninfo_unexecuted_blocks=1 00:04:36.735 00:04:36.735 ' 00:04:36.735 17:49:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:36.735 17:49:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:36.735 17:49:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:36.735 17:49:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.735 17:49:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.997 ************************************ 00:04:36.997 START TEST event_perf 00:04:36.997 ************************************ 00:04:36.997 17:49:55 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
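(For reference, the benchmark launched above can be reproduced by hand from an SPDK checkout; a minimal sketch, assuming hugepages are configured and the in-tree test binaries are built. The flags mirror the xtrace line above: -m 0xF starts reactors on cores 0-3, -t 1 runs for one second.)

  # run the event framework benchmark on 4 cores (mask 0xF) for 1 second
  sudo ./test/event/event_perf/event_perf -m 0xF -t 1
  # prints one "lcore N: <event count>" line per reactor, as in the output below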
00:04:36.998 Running I/O for 1 seconds...[2024-11-20 17:49:55.301157] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:36.998 [2024-11-20 17:49:55.301265] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332011 ] 00:04:36.998 [2024-11-20 17:49:55.389036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.998 [2024-11-20 17:49:55.434728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.998 [2024-11-20 17:49:55.434843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.998 [2024-11-20 17:49:55.435000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.998 Running I/O for 1 seconds...[2024-11-20 17:49:55.435000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.940 00:04:37.940 lcore 0: 181347 00:04:37.940 lcore 1: 181347 00:04:37.940 lcore 2: 181345 00:04:37.940 lcore 3: 181347 00:04:37.940 done. 00:04:37.940 00:04:37.940 real 0m1.190s 00:04:37.940 user 0m4.107s 00:04:37.940 sys 0m0.082s 00:04:37.940 17:49:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.940 17:49:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.940 ************************************ 00:04:37.940 END TEST event_perf 00:04:37.940 ************************************ 00:04:38.201 17:49:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:38.201 17:49:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:38.201 17:49:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.201 17:49:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.201 ************************************ 00:04:38.201 START TEST event_reactor 00:04:38.201 ************************************ 00:04:38.201 17:49:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:38.201 [2024-11-20 17:49:56.548640] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:38.201 [2024-11-20 17:49:56.548675] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332231 ] 00:04:38.201 [2024-11-20 17:49:56.615942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.201 [2024-11-20 17:49:56.650724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.586 test_start 00:04:39.586 oneshot 00:04:39.586 tick 100 00:04:39.586 tick 100 00:04:39.586 tick 250 00:04:39.586 tick 100 00:04:39.586 tick 100 00:04:39.586 tick 100 00:04:39.586 tick 250 00:04:39.586 tick 500 00:04:39.586 tick 100 00:04:39.586 tick 100 00:04:39.586 tick 250 00:04:39.586 tick 100 00:04:39.586 tick 100 00:04:39.586 test_end 00:04:39.586 00:04:39.586 real 0m1.140s 00:04:39.586 user 0m1.088s 00:04:39.586 sys 0m0.047s 00:04:39.586 17:49:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.586 17:49:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:39.586 ************************************ 00:04:39.586 END TEST event_reactor 00:04:39.586 ************************************ 00:04:39.586 17:49:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:39.586 17:49:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:39.586 17:49:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.586 17:49:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.586 ************************************ 00:04:39.586 START TEST event_reactor_perf 00:04:39.586 ************************************ 00:04:39.586 17:49:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:39.586 [2024-11-20 17:49:57.777059] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:39.586 [2024-11-20 17:49:57.777163] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332581 ] 00:04:39.586 [2024-11-20 17:49:57.857858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.586 [2024-11-20 17:49:57.892435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.529 test_start 00:04:40.529 test_end 00:04:40.529 Performance: 369107 events per second 00:04:40.529 00:04:40.529 real 0m1.168s 00:04:40.529 user 0m1.100s 00:04:40.529 sys 0m0.063s 00:04:40.529 17:49:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.529 17:49:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.529 ************************************ 00:04:40.529 END TEST event_reactor_perf 00:04:40.529 ************************************ 00:04:40.529 17:49:58 event -- event/event.sh@49 -- # uname -s 00:04:40.529 17:49:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:40.529 17:49:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:40.529 17:49:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.529 17:49:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.529 17:49:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.529 ************************************ 00:04:40.529 START TEST event_scheduler 00:04:40.529 ************************************ 00:04:40.529 17:49:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:40.795 * Looking for test storage... 
00:04:40.795 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.795 17:49:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.795 --rc genhtml_branch_coverage=1 00:04:40.795 --rc genhtml_function_coverage=1 00:04:40.795 --rc genhtml_legend=1 00:04:40.795 --rc geninfo_all_blocks=1 00:04:40.795 --rc geninfo_unexecuted_blocks=1 00:04:40.795 00:04:40.795 ' 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.795 --rc genhtml_branch_coverage=1 00:04:40.795 --rc genhtml_function_coverage=1 00:04:40.795 --rc genhtml_legend=1 00:04:40.795 --rc geninfo_all_blocks=1 00:04:40.795 --rc geninfo_unexecuted_blocks=1 00:04:40.795 00:04:40.795 ' 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.795 --rc genhtml_branch_coverage=1 00:04:40.795 --rc genhtml_function_coverage=1 00:04:40.795 --rc genhtml_legend=1 00:04:40.795 --rc geninfo_all_blocks=1 00:04:40.795 --rc geninfo_unexecuted_blocks=1 00:04:40.795 00:04:40.795 ' 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.795 --rc genhtml_branch_coverage=1 00:04:40.795 --rc genhtml_function_coverage=1 00:04:40.795 --rc genhtml_legend=1 00:04:40.795 --rc geninfo_all_blocks=1 00:04:40.795 --rc geninfo_unexecuted_blocks=1 00:04:40.795 00:04:40.795 ' 00:04:40.795 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:40.795 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3332967 00:04:40.795 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.795 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3332967 00:04:40.795 17:49:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
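(A minimal sketch of the startup sequence this test drives, assuming an SPDK checkout with the test binaries built: --wait-for-rpc holds subsystem initialization until framework_start_init is called over RPC, which is why the scheduler can still be changed after launch; -p 0x2 selects core 2 as the main lcore, matching the --main-lcore=2 EAL parameter recorded below.)

  sudo ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  ./scripts/rpc.py framework_set_scheduler dynamic   # only accepted before init completes
  ./scripts/rpc.py framework_start_init              # releases the target to finish startup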
00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3332967 ']' 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.795 17:49:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.796 17:49:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.796 17:49:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.796 [2024-11-20 17:49:59.249459] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:40.796 [2024-11-20 17:49:59.249511] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332967 ] 00:04:40.796 [2024-11-20 17:49:59.326895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.056 [2024-11-20 17:49:59.359520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.056 [2024-11-20 17:49:59.359673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.056 [2024-11-20 17:49:59.359824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.056 [2024-11-20 17:49:59.359825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.627 17:50:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.627 17:50:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:41.627 17:50:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:41.627 17:50:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.627 17:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.627 [2024-11-20 17:50:00.061977] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:41.627 [2024-11-20 17:50:00.061994] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:41.627 [2024-11-20 17:50:00.062002] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:41.627 [2024-11-20 17:50:00.062006] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:41.627 [2024-11-20 17:50:00.062010] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:41.627 17:50:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.627 17:50:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:41.627 17:50:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.627 17:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.628 [2024-11-20 17:50:00.119719] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
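(The scheduler_thread_create / scheduler_thread_set_active calls that follow are served by an out-of-tree RPC plugin rather than the core RPC set; a hedged sketch of issuing the same calls directly with rpc.py, assuming PYTHONPATH points at the directory providing the scheduler_plugin module — in this tree, test/event/scheduler. The thread id passed to set_active is the one returned by the create call, 11 in the run below.)

  PYTHONPATH=./test/event/scheduler ./scripts/rpc.py --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned to core 0, ~100% busy
  PYTHONPATH=./test/event/scheduler ./scripts/rpc.py --plugin scheduler_plugin \
      scheduler_thread_set_active 11 50                        # drop thread 11 to 50% active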
00:04:41.628 17:50:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.628 17:50:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:41.628 17:50:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.628 17:50:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.628 17:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.628 ************************************ 00:04:41.628 START TEST scheduler_create_thread 00:04:41.628 ************************************ 00:04:41.628 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:41.628 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:41.628 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.628 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 2 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 3 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 4 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 5 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 6 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 7 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 8 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 9 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.889 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.462 10 00:04:42.462 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.462 17:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:42.462 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.462 17:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.848 17:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.848 17:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:43.848 17:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:43.848 17:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.848 17:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.420 17:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.420 17:50:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:44.420 17:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.420 17:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.362 17:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.362 17:50:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:45.362 17:50:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:45.362 17:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.362 17:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.005 17:50:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.005 00:04:46.005 real 0m4.225s 00:04:46.005 user 0m0.022s 00:04:46.005 sys 0m0.009s 00:04:46.005 17:50:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.005 17:50:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.006 ************************************ 00:04:46.006 END TEST scheduler_create_thread 00:04:46.006 ************************************ 00:04:46.006 17:50:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:46.006 17:50:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3332967 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3332967 ']' 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3332967 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3332967 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3332967' 00:04:46.006 killing process with pid 3332967 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3332967 00:04:46.006 17:50:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3332967 00:04:46.286 [2024-11-20 17:50:04.664970] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
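[editor's note] scheduler_create_thread exercised the scheduler plugin RPCs one by one: four pinned threads at 100% activity, four pinned idle threads at 0, two unpinned threads, a live activity change on thread 11, and a delete of thread 12. The same cycle by hand, as a sketch (plugin name and flags exactly as traced; the thread name and mask here are examples, and the plugin module is assumed importable by rpc.py as it is in this test environment):

    rpc="scripts/rpc.py --plugin scheduler_plugin"
    tid=$($rpc scheduler_thread_create -n demo_pinned -m 0x1 -a 100)  # create returns the new thread id
    $rpc scheduler_thread_set_active "$tid" 50                        # retune activity from 100% to 50%
    $rpc scheduler_thread_delete "$tid"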
00:04:46.595 00:04:46.595 real 0m5.820s 00:04:46.595 user 0m12.986s 00:04:46.595 sys 0m0.424s 00:04:46.595 17:50:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.595 17:50:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.595 ************************************ 00:04:46.595 END TEST event_scheduler 00:04:46.595 ************************************ 00:04:46.595 17:50:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:46.595 17:50:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:46.595 17:50:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.595 17:50:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.595 17:50:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.595 ************************************ 00:04:46.595 START TEST app_repeat 00:04:46.595 ************************************ 00:04:46.595 17:50:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3334043 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3334043' 00:04:46.595 Process app_repeat pid: 3334043 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:46.595 spdk_app_start Round 0 00:04:46.595 17:50:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3334043 /var/tmp/spdk-nbd.sock 00:04:46.595 17:50:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3334043 ']' 00:04:46.595 17:50:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.595 17:50:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.595 17:50:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.595 17:50:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.595 17:50:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.595 [2024-11-20 17:50:04.941737] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:46.595 [2024-11-20 17:50:04.941805] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334043 ] 00:04:46.595 [2024-11-20 17:50:05.021207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.595 [2024-11-20 17:50:05.058181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.595 [2024-11-20 17:50:05.058182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.595 17:50:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.595 17:50:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:46.595 17:50:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.856 Malloc0 00:04:46.856 17:50:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.118 Malloc1 00:04:47.118 17:50:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.118 17:50:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.378 /dev/nbd0 00:04:47.378 17:50:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.378 17:50:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.378 1+0 records in 00:04:47.378 1+0 records out 00:04:47.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224722 s, 18.2 MB/s 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.378 17:50:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.378 17:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.378 17:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.378 17:50:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.378 /dev/nbd1 00:04:47.641 17:50:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.641 17:50:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.641 1+0 records in 00:04:47.641 1+0 records out 00:04:47.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280963 s, 14.6 MB/s 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.641 17:50:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.641 17:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.641 17:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.641 17:50:05 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.641 17:50:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.641 17:50:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:47.641 { 00:04:47.641 "nbd_device": "/dev/nbd0", 00:04:47.641 "bdev_name": "Malloc0" 00:04:47.641 }, 00:04:47.641 { 00:04:47.641 "nbd_device": "/dev/nbd1", 00:04:47.641 "bdev_name": "Malloc1" 00:04:47.641 } 00:04:47.641 ]' 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.641 { 00:04:47.641 "nbd_device": "/dev/nbd0", 00:04:47.641 "bdev_name": "Malloc0" 00:04:47.641 }, 00:04:47.641 { 00:04:47.641 "nbd_device": "/dev/nbd1", 00:04:47.641 "bdev_name": "Malloc1" 00:04:47.641 } 00:04:47.641 ]' 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.641 /dev/nbd1' 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.641 /dev/nbd1' 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.641 256+0 records in 00:04:47.641 256+0 records out 00:04:47.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126199 s, 83.1 MB/s 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.641 17:50:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.903 256+0 records in 00:04:47.903 256+0 records out 00:04:47.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164358 s, 63.8 MB/s 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.903 256+0 records in 00:04:47.903 256+0 records out 00:04:47.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205466 s, 51.0 MB/s 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.903 17:50:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.163 17:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.423 17:50:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.423 17:50:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.683 17:50:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.683 [2024-11-20 17:50:07.148110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.683 [2024-11-20 17:50:07.185038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.683 [2024-11-20 17:50:07.185199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.683 [2024-11-20 17:50:07.216972] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.683 [2024-11-20 17:50:07.217011] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:51.982 17:50:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.982 17:50:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:51.982 spdk_app_start Round 1 00:04:51.982 17:50:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3334043 /var/tmp/spdk-nbd.sock 00:04:51.982 17:50:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3334043 ']' 00:04:51.982 17:50:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.982 17:50:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.982 17:50:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
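[editor's note] Each app_repeat round re-creates two 64 MiB malloc bdevs (4096-byte blocks), exports them as /dev/nbd0 and /dev/nbd1, and proves data integrity with a plain dd/cmp round-trip. The core of that verify step, reduced from the trace above (the temp file path here is hypothetical; the harness keeps it inside the repo):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the nbd device
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte verify of the first 1 MiB
    done
    rm -f "$tmp"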
00:04:51.982 17:50:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.982 17:50:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.982 17:50:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.982 17:50:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:51.982 17:50:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.982 Malloc0 00:04:51.982 17:50:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.243 Malloc1 00:04:52.243 17:50:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.243 /dev/nbd0 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:52.243 1+0 records in 00:04:52.243 1+0 records out 00:04:52.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309739 s, 13.2 MB/s 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.243 17:50:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.243 17:50:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.504 /dev/nbd1 00:04:52.504 17:50:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.504 17:50:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.504 1+0 records in 00:04:52.504 1+0 records out 00:04:52.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279946 s, 14.6 MB/s 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.504 17:50:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.504 17:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.504 17:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.504 17:50:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.504 17:50:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.504 17:50:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.766 { 00:04:52.766 
"nbd_device": "/dev/nbd0", 00:04:52.766 "bdev_name": "Malloc0" 00:04:52.766 }, 00:04:52.766 { 00:04:52.766 "nbd_device": "/dev/nbd1", 00:04:52.766 "bdev_name": "Malloc1" 00:04:52.766 } 00:04:52.766 ]' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.766 { 00:04:52.766 "nbd_device": "/dev/nbd0", 00:04:52.766 "bdev_name": "Malloc0" 00:04:52.766 }, 00:04:52.766 { 00:04:52.766 "nbd_device": "/dev/nbd1", 00:04:52.766 "bdev_name": "Malloc1" 00:04:52.766 } 00:04:52.766 ]' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.766 /dev/nbd1' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.766 /dev/nbd1' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.766 256+0 records in 00:04:52.766 256+0 records out 00:04:52.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121415 s, 86.4 MB/s 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.766 256+0 records in 00:04:52.766 256+0 records out 00:04:52.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162575 s, 64.5 MB/s 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.766 256+0 records in 00:04:52.766 256+0 records out 00:04:52.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184482 s, 56.8 MB/s 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.766 17:50:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.028 17:50:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.288 17:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.548 17:50:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.548 17:50:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.548 17:50:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.809 [2024-11-20 17:50:12.185008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.809 [2024-11-20 17:50:12.222062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.809 [2024-11-20 17:50:12.222064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.809 [2024-11-20 17:50:12.254630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.809 [2024-11-20 17:50:12.254666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.111 17:50:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.111 17:50:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:57.111 spdk_app_start Round 2 00:04:57.111 17:50:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3334043 /var/tmp/spdk-nbd.sock 00:04:57.111 17:50:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3334043 ']' 00:04:57.111 17:50:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.111 17:50:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.111 17:50:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
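[editor's note] After tearing the disks down, the harness asks the target what is still exported and insists on zero. The counting trick from nbd_get_count, spelled out as a sketch (socket path as used throughout this run):

    json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # '|| true': grep -c exits 1 on no match
    [ "$count" -ne 0 ] && echo "unexpected nbd devices still attached: $names"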
00:04:57.111 17:50:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.111 17:50:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.111 17:50:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.111 17:50:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:57.111 17:50:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.111 Malloc0 00:04:57.111 17:50:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.111 Malloc1 00:04:57.111 17:50:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.111 17:50:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.111 17:50:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.111 17:50:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.111 17:50:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.112 17:50:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.374 /dev/nbd0 00:04:57.374 17:50:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.374 17:50:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:57.374 1+0 records in 00:04:57.374 1+0 records out 00:04:57.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272184 s, 15.0 MB/s 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.374 17:50:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.374 17:50:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.374 17:50:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.374 17:50:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.635 /dev/nbd1 00:04:57.635 17:50:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.635 17:50:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.635 1+0 records in 00:04:57.635 1+0 records out 00:04:57.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033481 s, 12.2 MB/s 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.635 17:50:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.635 17:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.635 17:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.635 17:50:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.635 17:50:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.635 17:50:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.897 { 00:04:57.897 
"nbd_device": "/dev/nbd0", 00:04:57.897 "bdev_name": "Malloc0" 00:04:57.897 }, 00:04:57.897 { 00:04:57.897 "nbd_device": "/dev/nbd1", 00:04:57.897 "bdev_name": "Malloc1" 00:04:57.897 } 00:04:57.897 ]' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.897 { 00:04:57.897 "nbd_device": "/dev/nbd0", 00:04:57.897 "bdev_name": "Malloc0" 00:04:57.897 }, 00:04:57.897 { 00:04:57.897 "nbd_device": "/dev/nbd1", 00:04:57.897 "bdev_name": "Malloc1" 00:04:57.897 } 00:04:57.897 ]' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.897 /dev/nbd1' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.897 /dev/nbd1' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.897 256+0 records in 00:04:57.897 256+0 records out 00:04:57.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125408 s, 83.6 MB/s 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.897 256+0 records in 00:04:57.897 256+0 records out 00:04:57.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016316 s, 64.3 MB/s 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.897 256+0 records in 00:04:57.897 256+0 records out 00:04:57.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192715 s, 54.4 MB/s 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.897 17:50:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.159 17:50:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.420 17:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.681 17:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.681 17:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.681 17:50:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.681 17:50:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.681 17:50:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.681 17:50:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.681 17:50:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.681 17:50:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.943 [2024-11-20 17:50:17.256704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.943 [2024-11-20 17:50:17.293447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.943 [2024-11-20 17:50:17.293448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.943 [2024-11-20 17:50:17.325263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.943 [2024-11-20 17:50:17.325305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.244 17:50:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3334043 /var/tmp/spdk-nbd.sock 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3334043 ']' 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
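The nbd_common.sh trace above, condensed into one readable sequence: seed a 1 MiB random file, write it through both NBD devices with O_DIRECT, read it back for comparison, then detach the devices and confirm the target reports none left. This is a sketch, not the harness code verbatim; the rpc/socket/file paths mirror the run, while the loop structure and variable names are illustrative.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp" bs=4096 count=256             # seed 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # push it through each NBD device
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                             # byte-compare what the bdev stored
done
rm "$tmp"
for dev in "${nbd_list[@]}"; do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"                 # detach both devices
done
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd) || true
[ "$count" -eq 0 ]   # grep -c exits non-zero on zero matches, hence the bare `true` in the trace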
00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.244 17:50:20 event.app_repeat -- event/event.sh@39 -- # killprocess 3334043 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3334043 ']' 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3334043 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3334043 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3334043' 00:05:02.244 killing process with pid 3334043 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3334043 00:05:02.244 17:50:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3334043 00:05:02.244 spdk_app_start is called in Round 0. 00:05:02.244 Shutdown signal received, stop current app iteration 00:05:02.244 Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 reinitialization... 00:05:02.244 spdk_app_start is called in Round 1. 00:05:02.244 Shutdown signal received, stop current app iteration 00:05:02.244 Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 reinitialization... 00:05:02.244 spdk_app_start is called in Round 2. 00:05:02.244 Shutdown signal received, stop current app iteration 00:05:02.245 Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 reinitialization... 00:05:02.245 spdk_app_start is called in Round 3. 
00:05:02.245 Shutdown signal received, stop current app iteration 00:05:02.245 17:50:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:02.245 17:50:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:02.245 00:05:02.245 real 0m15.564s 00:05:02.245 user 0m33.864s 00:05:02.245 sys 0m2.241s 00:05:02.245 17:50:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.245 17:50:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.245 ************************************ 00:05:02.245 END TEST app_repeat 00:05:02.245 ************************************ 00:05:02.245 17:50:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:02.245 17:50:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:02.245 17:50:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.245 17:50:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.245 17:50:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.245 ************************************ 00:05:02.245 START TEST cpu_locks 00:05:02.245 ************************************ 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:02.245 * Looking for test storage... 00:05:02.245 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.245 17:50:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.245 --rc genhtml_branch_coverage=1 00:05:02.245 --rc genhtml_function_coverage=1 00:05:02.245 --rc genhtml_legend=1 00:05:02.245 --rc geninfo_all_blocks=1 00:05:02.245 --rc geninfo_unexecuted_blocks=1 00:05:02.245 00:05:02.245 ' 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.245 --rc genhtml_branch_coverage=1 00:05:02.245 --rc genhtml_function_coverage=1 00:05:02.245 --rc genhtml_legend=1 00:05:02.245 --rc geninfo_all_blocks=1 00:05:02.245 --rc geninfo_unexecuted_blocks=1 00:05:02.245 00:05:02.245 ' 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.245 --rc genhtml_branch_coverage=1 00:05:02.245 --rc genhtml_function_coverage=1 00:05:02.245 --rc genhtml_legend=1 00:05:02.245 --rc geninfo_all_blocks=1 00:05:02.245 --rc geninfo_unexecuted_blocks=1 00:05:02.245 00:05:02.245 ' 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.245 --rc genhtml_branch_coverage=1 00:05:02.245 --rc genhtml_function_coverage=1 00:05:02.245 --rc genhtml_legend=1 00:05:02.245 --rc geninfo_all_blocks=1 00:05:02.245 --rc geninfo_unexecuted_blocks=1 00:05:02.245 00:05:02.245 ' 00:05:02.245 17:50:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:02.245 17:50:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:02.245 17:50:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:02.245 17:50:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.245 17:50:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.505 ************************************ 
00:05:02.505 START TEST default_locks 00:05:02.505 ************************************ 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3337434 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3337434 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3337434 ']' 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.505 17:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.505 [2024-11-20 17:50:20.847185] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:02.505 [2024-11-20 17:50:20.847253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337434 ] 00:05:02.506 [2024-11-20 17:50:20.932350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.506 [2024-11-20 17:50:20.974182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.447 17:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.447 17:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:03.447 17:50:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3337434 00:05:03.447 17:50:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3337434 00:05:03.447 17:50:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.707 lslocks: write error 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3337434 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3337434 ']' 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3337434 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3337434 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3337434' 00:05:03.707 killing process with pid 3337434 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3337434 00:05:03.707 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3337434 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3337434 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3337434 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3337434 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3337434 ']' 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
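The killprocess helper that just ran for pid 3337434 follows a fixed shape, visible in the xtrace: verify the pid is alive, check the process name (a sudo wrapper would need different handling), announce, then SIGTERM and reap. A rough reconstruction from the trace; the real autotest_common.sh may differ in detail, and the sudo branch is elided since this run never takes it.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                      # fail fast if the process is already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # the real helper special-cases process_name = sudo; this run never hits it
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap the child and surface its exit status
}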
00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.969 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3337434) - No such process 00:05:03.969 ERROR: process (pid: 3337434) is no longer running 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.969 00:05:03.969 real 0m1.593s 00:05:03.969 user 0m1.690s 00:05:03.969 sys 0m0.575s 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.969 17:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.969 ************************************ 00:05:03.969 END TEST default_locks 00:05:03.969 ************************************ 00:05:03.969 17:50:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:03.969 17:50:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.969 17:50:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.969 17:50:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.969 ************************************ 00:05:03.969 START TEST default_locks_via_rpc 00:05:03.969 ************************************ 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3337782 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3337782 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3337782 ']' 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
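The negative check that concluded above is the harness NOT wrapper: run a command that must fail, and succeed only if it does. Reconstructed minimally from the xtrace; the real helper also validates that its argument is executable and distinguishes signal deaths (es > 128), neither of which mattered here.

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # invert: only the wrapped command failing counts as a pass
}

NOT waitforlisten 3337434 /var/tmp/spdk.sock   # the pid was just killed, so this must fail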
00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.969 17:50:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.231 [2024-11-20 17:50:22.517759] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:04.231 [2024-11-20 17:50:22.517817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337782 ] 00:05:04.231 [2024-11-20 17:50:22.602334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.231 [2024-11-20 17:50:22.642030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.803 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3337782 00:05:05.064 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3337782 00:05:05.064 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.328 17:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3337782 00:05:05.329 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3337782 ']' 00:05:05.329 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3337782 00:05:05.329 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.329 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.330 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3337782 00:05:05.592 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.592 
17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.592 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3337782' 00:05:05.592 killing process with pid 3337782 00:05:05.592 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3337782 00:05:05.592 17:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3337782 00:05:05.592 00:05:05.592 real 0m1.625s 00:05:05.592 user 0m1.752s 00:05:05.592 sys 0m0.563s 00:05:05.592 17:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.592 17:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.592 ************************************ 00:05:05.592 END TEST default_locks_via_rpc 00:05:05.592 ************************************ 00:05:05.592 17:50:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:05.593 17:50:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.593 17:50:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.593 17:50:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.854 ************************************ 00:05:05.854 START TEST non_locking_app_on_locked_coremask 00:05:05.854 ************************************ 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3338146 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3338146 /var/tmp/spdk.sock 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3338146 ']' 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.854 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.854 [2024-11-20 17:50:24.203897] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
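Before following the next test, note what default_locks_via_rpc exercised above: the same core locks can be dropped and re-taken at runtime over the RPC socket rather than at launch. A two-line sketch, where rpc_cmd is the harness wrapper around scripts/rpc.py:

rpc_cmd framework_disable_cpumask_locks   # lock files released; the no_locks check then passes
rpc_cmd framework_enable_cpumask_locks    # locks re-acquired; lslocks sees spdk_cpu_lock again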
00:05:05.854 [2024-11-20 17:50:24.203949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338146 ] 00:05:05.854 [2024-11-20 17:50:24.281906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.854 [2024-11-20 17:50:24.318680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3338370 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3338370 /var/tmp/spdk2.sock 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3338370 ']' 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.796 17:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.796 [2024-11-20 17:50:25.020455] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:06.796 [2024-11-20 17:50:25.020507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338370 ] 00:05:06.796 [2024-11-20 17:50:25.140339] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
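This is the heart of non_locking_app_on_locked_coremask: pid 3338146 already holds the core-0 lock, yet the second target (pid 3338370) comes up cleanly on the same mask because it opts out of lock files and answers on its own RPC socket. Condensed from the trace; the backgrounding and pid capture shown here are illustrative.

spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock   # succeeds despite the existing core-0 lock holder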
00:05:06.796 [2024-11-20 17:50:25.140366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.796 [2024-11-20 17:50:25.212656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.369 17:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.369 17:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.369 17:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3338146 00:05:07.369 17:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3338146 00:05:07.369 17:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.310 lslocks: write error 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3338146 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3338146 ']' 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3338146 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3338146 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3338146' 00:05:08.310 killing process with pid 3338146 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3338146 00:05:08.310 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3338146 00:05:08.570 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3338370 00:05:08.570 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3338370 ']' 00:05:08.570 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3338370 00:05:08.570 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.570 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.570 17:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3338370 00:05:08.570 17:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.571 17:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.571 17:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3338370' 00:05:08.571 
killing process with pid 3338370 00:05:08.571 17:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3338370 00:05:08.571 17:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3338370 00:05:08.832 00:05:08.832 real 0m3.104s 00:05:08.832 user 0m3.377s 00:05:08.832 sys 0m0.957s 00:05:08.832 17:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.832 17:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.832 ************************************ 00:05:08.832 END TEST non_locking_app_on_locked_coremask 00:05:08.832 ************************************ 00:05:08.832 17:50:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:08.832 17:50:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.832 17:50:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.832 17:50:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.832 ************************************ 00:05:08.832 START TEST locking_app_on_unlocked_coremask 00:05:08.832 ************************************ 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3338768 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3338768 /var/tmp/spdk.sock 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3338768 ']' 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.832 17:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.093 [2024-11-20 17:50:27.379347] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:09.093 [2024-11-20 17:50:27.379403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338768 ] 00:05:09.093 [2024-11-20 17:50:27.458154] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
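The locks_exist probe used for each target above (and again below) is a one-liner, as traced at cpu_locks.sh@22. The stray "lslocks: write error" lines around it are benign: grep -q exits as soon as it matches, closing the pipe, and lslocks reports the resulting failed write.

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # does this pid hold an spdk_cpu_lock_* file lock?
}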
00:05:09.093 [2024-11-20 17:50:27.458183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.093 [2024-11-20 17:50:27.495721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.663 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.663 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.663 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3339081 00:05:09.663 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3339081 /var/tmp/spdk2.sock 00:05:09.663 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3339081 ']' 00:05:09.664 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:09.664 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.664 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.664 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.664 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.664 17:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.924 [2024-11-20 17:50:28.215510] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:09.924 [2024-11-20 17:50:28.215565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339081 ] 00:05:09.924 [2024-11-20 17:50:28.338700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.924 [2024-11-20 17:50:28.411198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.495 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.495 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.495 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3339081 00:05:10.495 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3339081 00:05:10.495 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.436 lslocks: write error 00:05:11.436 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3338768 00:05:11.436 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3338768 ']' 00:05:11.436 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3338768 00:05:11.437 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.437 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.437 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3338768 00:05:11.437 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.437 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.437 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3338768' 00:05:11.437 killing process with pid 3338768 00:05:11.437 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3338768 00:05:11.437 17:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3338768 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3339081 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3339081 ']' 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3339081 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339081 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.697 17:50:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339081' 00:05:11.697 killing process with pid 3339081 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3339081 00:05:11.697 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3339081 00:05:11.958 00:05:11.958 real 0m3.041s 00:05:11.958 user 0m3.358s 00:05:11.958 sys 0m0.919s 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.958 ************************************ 00:05:11.958 END TEST locking_app_on_unlocked_coremask 00:05:11.958 ************************************ 00:05:11.958 17:50:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:11.958 17:50:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.958 17:50:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.958 17:50:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.958 ************************************ 00:05:11.958 START TEST locking_app_on_locked_coremask 00:05:11.958 ************************************ 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3339480 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3339480 /var/tmp/spdk.sock 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3339480 ']' 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.958 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.958 [2024-11-20 17:50:30.478067] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
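Each of these subtests is framed the same way by run_test, which produced the START/END banners and the real/user/sys triplets just seen for locking_app_on_unlocked_coremask. A loose reconstruction inferred from the banners and timing output; the actual helper in autotest_common.sh carries more argument checking than shown.

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                  # source of the real/user/sys lines throughout this log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}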
00:05:11.958 [2024-11-20 17:50:30.478115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339480 ] 00:05:12.218 [2024-11-20 17:50:30.553801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.218 [2024-11-20 17:50:30.589887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.477 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.477 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:12.477 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3339547 00:05:12.477 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3339547 /var/tmp/spdk2.sock 00:05:12.477 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:12.477 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:12.477 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3339547 /var/tmp/spdk2.sock 00:05:12.477 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3339547 /var/tmp/spdk2.sock 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3339547 ']' 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.478 17:50:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.478 [2024-11-20 17:50:30.840399] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:12.478 [2024-11-20 17:50:30.840451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339547 ] 00:05:12.478 [2024-11-20 17:50:30.960044] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3339480 has claimed it. 00:05:12.478 [2024-11-20 17:50:30.960085] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.047 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3339547) - No such process 00:05:13.047 ERROR: process (pid: 3339547) is no longer running 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3339480 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3339480 00:05:13.047 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.616 lslocks: write error 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3339480 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3339480 ']' 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3339480 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339480 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339480' 00:05:13.616 killing process with pid 3339480 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3339480 00:05:13.616 17:50:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3339480 00:05:13.875 00:05:13.875 real 0m1.723s 00:05:13.875 user 0m1.880s 00:05:13.875 sys 0m0.593s 00:05:13.875 17:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
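locking_app_on_locked_coremask, timed just above, is the mirror image of the earlier tests: with locks left enabled, a second target on an already-claimed mask must die before it ever listens. In outline (a sketch; pid handling is illustrative):

"$spdk_tgt" -m 0x1 & pid=$!                          # claims /var/tmp/spdk_cpu_lock_000
waitforlisten "$pid"
"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!  # same mask, locks left enabled
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock
# the second instance logs "Cannot create lock on core 0, probably process
# <pid> has claimed it" and exits before its RPC socket ever appears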
00:05:13.875 17:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.875 ************************************ 00:05:13.876 END TEST locking_app_on_locked_coremask 00:05:13.876 ************************************ 00:05:13.876 17:50:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:13.876 17:50:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.876 17:50:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.876 17:50:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.876 ************************************ 00:05:13.876 START TEST locking_overlapped_coremask 00:05:13.876 ************************************ 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3339844 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3339844 /var/tmp/spdk.sock 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3339844 ']' 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.876 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.876 [2024-11-20 17:50:32.270634] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:13.876 [2024-11-20 17:50:32.270680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339844 ] 00:05:13.876 [2024-11-20 17:50:32.347882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.876 [2024-11-20 17:50:32.386777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.876 [2024-11-20 17:50:32.386895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.876 [2024-11-20 17:50:32.386898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3339978 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3339978 /var/tmp/spdk2.sock 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3339978 /var/tmp/spdk2.sock 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3339978 /var/tmp/spdk2.sock 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3339978 ']' 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.136 17:50:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.136 [2024-11-20 17:50:32.632621] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:14.136 [2024-11-20 17:50:32.632673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339978 ] 00:05:14.396 [2024-11-20 17:50:32.724815] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3339844 has claimed it. 00:05:14.396 [2024-11-20 17:50:32.724845] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:14.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3339978) - No such process 00:05:14.966 ERROR: process (pid: 3339978) is no longer running 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3339844 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3339844 ']' 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3339844 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339844 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339844' 00:05:14.966 killing process with pid 3339844 00:05:14.966 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3339844 00:05:14.966 17:50:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3339844 00:05:15.226 00:05:15.226 real 0m1.295s 00:05:15.226 user 0m3.614s 00:05:15.226 sys 0m0.350s 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.226 ************************************ 00:05:15.226 END TEST locking_overlapped_coremask 00:05:15.226 ************************************ 00:05:15.226 17:50:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:15.226 17:50:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.226 17:50:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.226 17:50:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.226 ************************************ 00:05:15.226 START TEST locking_overlapped_coremask_via_rpc 00:05:15.226 ************************************ 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3340215 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3340215 /var/tmp/spdk.sock 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3340215 ']' 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.226 17:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.226 [2024-11-20 17:50:33.666077] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:15.226 [2024-11-20 17:50:33.666128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340215 ] 00:05:15.226 [2024-11-20 17:50:33.744372] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
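The notice above is the effect of --disable-cpumask-locks: the target starts without claiming /var/tmp/spdk_cpu_lock_*, so overlapping masks can coexist until one process asks for the locks over RPC. A sketch of the late claim, reusing the rpc.py path seen later in this run:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  sleep 1
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # on success the target now holds /var/tmp/spdk_cpu_lock_{000..002}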
00:05:15.226 [2024-11-20 17:50:33.744400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.486 [2024-11-20 17:50:33.783956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.486 [2024-11-20 17:50:33.784212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.486 [2024-11-20 17:50:33.784216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3340438 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3340438 /var/tmp/spdk2.sock 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3340438 ']' 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.057 17:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.057 [2024-11-20 17:50:34.513463] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:16.057 [2024-11-20 17:50:34.513516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340438 ] 00:05:16.318 [2024-11-20 17:50:34.611686] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
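For reference, -m takes a hexadecimal core bitmask: 0x7 = 0b111 selects cores 0-2 (the three reactor notices above), and 0x1c = 0b11100 selects cores 2-4 (the reactor notices below), so core 2 is the only overlap and the one the lock claim fails on. A quick way to decode a mask:

  for m in 0x7 0x1c; do
    printf 'mask %s -> cores:' "$m"
    for i in {0..7}; do (( (m >> i) & 1 )) && printf ' %d' "$i"; done
    echo
  done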
00:05:16.318 [2024-11-20 17:50:34.611711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.318 [2024-11-20 17:50:34.670900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.318 [2024-11-20 17:50:34.673984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.318 [2024-11-20 17:50:34.673987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:16.888 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.889 [2024-11-20 17:50:35.318926] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3340215 has claimed it. 
00:05:16.889 request: 00:05:16.889 { 00:05:16.889 "method": "framework_enable_cpumask_locks", 00:05:16.889 "req_id": 1 00:05:16.889 } 00:05:16.889 Got JSON-RPC error response 00:05:16.889 response: 00:05:16.889 { 00:05:16.889 "code": -32603, 00:05:16.889 "message": "Failed to claim CPU core: 2" 00:05:16.889 } 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3340215 /var/tmp/spdk.sock 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3340215 ']' 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.889 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.149 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.149 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.149 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3340438 /var/tmp/spdk2.sock 00:05:17.149 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3340438 ']' 00:05:17.150 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.150 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.150 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
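The -32603 response above is the RPC-level form of the same conflict: the first target (mask 0x7) already enabled its locks, so the second (mask 0x1c) cannot claim shared core 2. Issued by hand it would look like this, assuming the same sockets:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_enable_cpumask_locks   # first target: succeeds
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # second target fails: {"code": -32603, "message": "Failed to claim CPU core: 2"}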
00:05:17.150 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.150 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.410 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.410 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:17.410 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:17.410 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:17.410 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:17.410 00:05:17.410 real 0m2.089s 00:05:17.410 user 0m0.880s 00:05:17.410 sys 0m0.134s 00:05:17.410 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.411 17:50:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.411 ************************************ 00:05:17.411 END TEST locking_overlapped_coremask_via_rpc 00:05:17.411 ************************************ 00:05:17.411 17:50:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:17.411 17:50:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3340215 ]] 00:05:17.411 17:50:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3340215 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3340215 ']' 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3340215 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3340215 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3340215' 00:05:17.411 killing process with pid 3340215 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3340215 00:05:17.411 17:50:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3340215 00:05:17.671 17:50:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3340438 ]] 00:05:17.672 17:50:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3340438 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3340438 ']' 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3340438 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3340438 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3340438' 00:05:17.672 killing process with pid 3340438 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3340438 00:05:17.672 17:50:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3340438 00:05:17.933 17:50:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.933 17:50:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:17.933 17:50:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3340215 ]] 00:05:17.933 17:50:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3340215 00:05:17.933 17:50:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3340215 ']' 00:05:17.933 17:50:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3340215 00:05:17.933 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3340215) - No such process 00:05:17.933 17:50:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3340215 is not found' 00:05:17.933 Process with pid 3340215 is not found 00:05:17.933 17:50:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3340438 ]] 00:05:17.933 17:50:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3340438 00:05:17.933 17:50:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3340438 ']' 00:05:17.933 17:50:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3340438 00:05:17.933 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3340438) - No such process 00:05:17.933 17:50:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3340438 is not found' 00:05:17.933 Process with pid 3340438 is not found 00:05:17.933 17:50:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.933 00:05:17.933 real 0m15.728s 00:05:17.933 user 0m26.722s 00:05:17.933 sys 0m5.038s 00:05:17.933 17:50:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.933 17:50:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.933 ************************************ 00:05:17.933 END TEST cpu_locks 00:05:17.933 ************************************ 00:05:17.933 00:05:17.933 real 0m41.272s 00:05:17.933 user 1m20.175s 00:05:17.933 sys 0m8.281s 00:05:17.933 17:50:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.933 17:50:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.933 ************************************ 00:05:17.933 END TEST event 00:05:17.933 ************************************ 00:05:17.933 17:50:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:17.933 17:50:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.933 17:50:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.933 17:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:17.933 ************************************ 00:05:17.933 START TEST thread 00:05:17.933 ************************************ 00:05:17.933 17:50:36 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:18.193 * Looking for test storage... 00:05:18.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:18.193 17:50:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.193 17:50:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.193 17:50:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.193 17:50:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.193 17:50:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.193 17:50:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.193 17:50:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.193 17:50:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.193 17:50:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.193 17:50:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.193 17:50:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.193 17:50:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.193 17:50:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.193 17:50:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.193 17:50:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.193 17:50:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:18.193 17:50:36 thread -- scripts/common.sh@345 -- # : 1 00:05:18.193 17:50:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.193 17:50:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.193 17:50:36 thread -- scripts/common.sh@365 -- # decimal 1 00:05:18.193 17:50:36 thread -- scripts/common.sh@353 -- # local d=1 00:05:18.193 17:50:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.193 17:50:36 thread -- scripts/common.sh@355 -- # echo 1 00:05:18.193 17:50:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.193 17:50:36 thread -- scripts/common.sh@366 -- # decimal 2 00:05:18.193 17:50:36 thread -- scripts/common.sh@353 -- # local d=2 00:05:18.193 17:50:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.193 17:50:36 thread -- scripts/common.sh@355 -- # echo 2 00:05:18.193 17:50:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.193 17:50:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.193 17:50:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.193 17:50:36 thread -- scripts/common.sh@368 -- # return 0 00:05:18.193 17:50:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.193 17:50:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.193 --rc genhtml_branch_coverage=1 00:05:18.193 --rc genhtml_function_coverage=1 00:05:18.193 --rc genhtml_legend=1 00:05:18.193 --rc geninfo_all_blocks=1 00:05:18.193 --rc geninfo_unexecuted_blocks=1 00:05:18.193 00:05:18.193 ' 00:05:18.193 17:50:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.193 --rc genhtml_branch_coverage=1 00:05:18.193 --rc genhtml_function_coverage=1 00:05:18.193 --rc genhtml_legend=1 00:05:18.193 --rc geninfo_all_blocks=1 00:05:18.193 --rc geninfo_unexecuted_blocks=1 00:05:18.193 00:05:18.193 ' 00:05:18.193 17:50:36 thread -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.193 --rc genhtml_branch_coverage=1 00:05:18.193 --rc genhtml_function_coverage=1 00:05:18.193 --rc genhtml_legend=1 00:05:18.193 --rc geninfo_all_blocks=1 00:05:18.194 --rc geninfo_unexecuted_blocks=1 00:05:18.194 00:05:18.194 ' 00:05:18.194 17:50:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.194 --rc genhtml_branch_coverage=1 00:05:18.194 --rc genhtml_function_coverage=1 00:05:18.194 --rc genhtml_legend=1 00:05:18.194 --rc geninfo_all_blocks=1 00:05:18.194 --rc geninfo_unexecuted_blocks=1 00:05:18.194 00:05:18.194 ' 00:05:18.194 17:50:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:18.194 17:50:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:18.194 17:50:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.194 17:50:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.194 ************************************ 00:05:18.194 START TEST thread_poller_perf 00:05:18.194 ************************************ 00:05:18.194 17:50:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:18.194 [2024-11-20 17:50:36.637540] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:18.194 [2024-11-20 17:50:36.637636] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340998 ] 00:05:18.194 [2024-11-20 17:50:36.720482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.454 [2024-11-20 17:50:36.756253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.454 Running 1000 pollers for 1 seconds with 1 microseconds period. 
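The banner above restates poller_perf's flags; read against the command traced earlier, -b 1000 registers 1000 pollers, -l 1 gives each a 1 microsecond period (the second run uses -l 0, pollers fire on every reactor iteration), and -t 1 runs for one second. This flag reading is inferred from the banners, not from the tool's help text:

  perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf
  $perf -b 1000 -l 1 -t 1    # timed pollers, summary printed at exit
  $perf -b 1000 -l 0 -t 1    # untimed pollers, as in the second run below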
00:05:19.395 [2024-11-20T16:50:37.931Z] ====================================== 00:05:19.395 [2024-11-20T16:50:37.931Z] busy:2411499134 (cyc) 00:05:19.395 [2024-11-20T16:50:37.931Z] total_run_count: 288000 00:05:19.395 [2024-11-20T16:50:37.931Z] tsc_hz: 2400000000 (cyc) 00:05:19.395 [2024-11-20T16:50:37.931Z] ====================================== 00:05:19.395 [2024-11-20T16:50:37.931Z] poller_cost: 8373 (cyc), 3488 (nsec) 00:05:19.395 00:05:19.395 real 0m1.180s 00:05:19.395 user 0m1.110s 00:05:19.395 sys 0m0.066s 00:05:19.395 17:50:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.395 17:50:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.395 ************************************ 00:05:19.395 END TEST thread_poller_perf 00:05:19.395 ************************************ 00:05:19.395 17:50:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:19.395 17:50:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:19.395 17:50:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.395 17:50:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.395 ************************************ 00:05:19.395 START TEST thread_poller_perf 00:05:19.395 ************************************ 00:05:19.395 17:50:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:19.395 [2024-11-20 17:50:37.896915] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:19.395 [2024-11-20 17:50:37.897013] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341264 ] 00:05:19.656 [2024-11-20 17:50:37.988075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.656 [2024-11-20 17:50:38.025575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.656 Running 1000 pollers for 1 seconds with 0 microseconds period. 
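The per-poller cost in the summary above is plain integer arithmetic: busy cycles divided by iterations, converted to nanoseconds with the reported TSC rate. Checked against the first run's numbers:

  busy=2411499134 runs=288000 tsc_hz=2400000000
  cyc=$(( busy / runs ))                        # 2411499134 / 288000 = 8373 cyc
  nsec=$(( cyc * 1000000000 / tsc_hz ))         # 8373 cyc at 2.4 GHz = 3488 nsec
  echo "poller_cost: $cyc (cyc), $nsec (nsec)"

The second summary below works out the same way: 2402261574 / 3816000 = 629 cyc, i.e. 262 nsec.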
00:05:20.597 [2024-11-20T16:50:39.133Z] ====================================== 00:05:20.597 [2024-11-20T16:50:39.133Z] busy:2402261574 (cyc) 00:05:20.597 [2024-11-20T16:50:39.133Z] total_run_count: 3816000 00:05:20.597 [2024-11-20T16:50:39.133Z] tsc_hz: 2400000000 (cyc) 00:05:20.597 [2024-11-20T16:50:39.133Z] ====================================== 00:05:20.597 [2024-11-20T16:50:39.133Z] poller_cost: 629 (cyc), 262 (nsec) 00:05:20.597 00:05:20.597 real 0m1.183s 00:05:20.597 user 0m1.105s 00:05:20.597 sys 0m0.074s 00:05:20.597 17:50:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.597 17:50:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.597 ************************************ 00:05:20.597 END TEST thread_poller_perf 00:05:20.597 ************************************ 00:05:20.597 17:50:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:20.597 00:05:20.597 real 0m2.710s 00:05:20.597 user 0m2.395s 00:05:20.597 sys 0m0.327s 00:05:20.597 17:50:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.597 17:50:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.597 ************************************ 00:05:20.597 END TEST thread 00:05:20.597 ************************************ 00:05:20.858 17:50:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:20.858 17:50:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:20.858 17:50:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.858 17:50:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.858 17:50:39 -- common/autotest_common.sh@10 -- # set +x 00:05:20.858 ************************************ 00:05:20.858 START TEST app_cmdline 00:05:20.858 ************************************ 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:20.858 * Looking for test storage... 
00:05:20.858 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.858 17:50:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.858 --rc genhtml_branch_coverage=1 00:05:20.858 --rc genhtml_function_coverage=1 00:05:20.858 --rc genhtml_legend=1 00:05:20.858 --rc geninfo_all_blocks=1 00:05:20.858 --rc geninfo_unexecuted_blocks=1 00:05:20.858 00:05:20.858 ' 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.858 --rc genhtml_branch_coverage=1 00:05:20.858 --rc genhtml_function_coverage=1 00:05:20.858 --rc genhtml_legend=1 00:05:20.858 --rc geninfo_all_blocks=1 00:05:20.858 --rc geninfo_unexecuted_blocks=1 
00:05:20.858 00:05:20.858 ' 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.858 --rc genhtml_branch_coverage=1 00:05:20.858 --rc genhtml_function_coverage=1 00:05:20.858 --rc genhtml_legend=1 00:05:20.858 --rc geninfo_all_blocks=1 00:05:20.858 --rc geninfo_unexecuted_blocks=1 00:05:20.858 00:05:20.858 ' 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.858 --rc genhtml_branch_coverage=1 00:05:20.858 --rc genhtml_function_coverage=1 00:05:20.858 --rc genhtml_legend=1 00:05:20.858 --rc geninfo_all_blocks=1 00:05:20.858 --rc geninfo_unexecuted_blocks=1 00:05:20.858 00:05:20.858 ' 00:05:20.858 17:50:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:20.858 17:50:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3341561 00:05:20.858 17:50:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3341561 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3341561 ']' 00:05:20.858 17:50:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.858 17:50:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:21.120 [2024-11-20 17:50:39.431828] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
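Note the --rpcs-allowed list on the launch above: this target serves only spdk_get_version and rpc_get_methods, which is exactly what the test exercises. The allowed call returns the version object shown below; env_dpdk_get_mem_stats, further down, is rejected with -32601. A sketch of the same probe:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc spdk_get_version            # allowed, returns the version JSON
  $rpc rpc_get_methods             # allowed, lists the two permitted methods
  $rpc env_dpdk_get_mem_stats      # filtered: "Method not found" (-32601)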
00:05:21.120 [2024-11-20 17:50:39.431897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341561 ] 00:05:21.120 [2024-11-20 17:50:39.508271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.120 [2024-11-20 17:50:39.544754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.690 17:50:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.690 17:50:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:21.690 17:50:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:21.951 { 00:05:21.951 "version": "SPDK v25.01-pre git sha1 2e015e34f", 00:05:21.951 "fields": { 00:05:21.951 "major": 25, 00:05:21.951 "minor": 1, 00:05:21.951 "patch": 0, 00:05:21.951 "suffix": "-pre", 00:05:21.951 "commit": "2e015e34f" 00:05:21.951 } 00:05:21.951 } 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:21.951 17:50:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:21.951 17:50:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.212 request: 00:05:22.212 { 00:05:22.212 "method": "env_dpdk_get_mem_stats", 00:05:22.212 "req_id": 1 00:05:22.212 } 00:05:22.212 Got JSON-RPC error response 00:05:22.212 response: 00:05:22.212 { 00:05:22.212 "code": -32601, 00:05:22.212 "message": "Method not found" 00:05:22.212 } 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.212 17:50:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3341561 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3341561 ']' 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3341561 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3341561 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3341561' 00:05:22.212 killing process with pid 3341561 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 3341561 00:05:22.212 17:50:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 3341561 00:05:22.473 00:05:22.473 real 0m1.720s 00:05:22.473 user 0m2.061s 00:05:22.473 sys 0m0.441s 00:05:22.473 17:50:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.473 17:50:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.473 ************************************ 00:05:22.473 END TEST app_cmdline 00:05:22.473 ************************************ 00:05:22.473 17:50:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:22.473 17:50:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.473 17:50:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.473 17:50:40 -- common/autotest_common.sh@10 -- # set +x 00:05:22.473 ************************************ 00:05:22.473 START TEST version 00:05:22.473 ************************************ 00:05:22.473 17:50:40 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:22.736 * Looking for test storage... 
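The version test traced below pulls each field out of include/spdk/version.h with a grep | cut | tr pipeline and then compares python's spdk.__version__ against the assembled string. Condensed, assuming the tab-delimited defines that cut -f2 implies:

  hdr=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}${suffix}"   # 25.1-pre here; the script maps -pre to rc0, hence 25.1rc0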
00:05:22.736 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.736 17:50:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.736 17:50:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.736 17:50:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.736 17:50:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.736 17:50:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.736 17:50:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.736 17:50:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.736 17:50:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.736 17:50:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.736 17:50:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.736 17:50:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.736 17:50:41 version -- scripts/common.sh@344 -- # case "$op" in 00:05:22.736 17:50:41 version -- scripts/common.sh@345 -- # : 1 00:05:22.736 17:50:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.736 17:50:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.736 17:50:41 version -- scripts/common.sh@365 -- # decimal 1 00:05:22.736 17:50:41 version -- scripts/common.sh@353 -- # local d=1 00:05:22.736 17:50:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.736 17:50:41 version -- scripts/common.sh@355 -- # echo 1 00:05:22.736 17:50:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.736 17:50:41 version -- scripts/common.sh@366 -- # decimal 2 00:05:22.736 17:50:41 version -- scripts/common.sh@353 -- # local d=2 00:05:22.736 17:50:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.736 17:50:41 version -- scripts/common.sh@355 -- # echo 2 00:05:22.736 17:50:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.736 17:50:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.736 17:50:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.736 17:50:41 version -- scripts/common.sh@368 -- # return 0 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.736 --rc genhtml_branch_coverage=1 00:05:22.736 --rc genhtml_function_coverage=1 00:05:22.736 --rc genhtml_legend=1 00:05:22.736 --rc geninfo_all_blocks=1 00:05:22.736 --rc geninfo_unexecuted_blocks=1 00:05:22.736 00:05:22.736 ' 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.736 --rc genhtml_branch_coverage=1 00:05:22.736 --rc genhtml_function_coverage=1 00:05:22.736 --rc genhtml_legend=1 00:05:22.736 --rc geninfo_all_blocks=1 00:05:22.736 --rc geninfo_unexecuted_blocks=1 00:05:22.736 00:05:22.736 ' 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.736 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.736 --rc genhtml_branch_coverage=1 00:05:22.736 --rc genhtml_function_coverage=1 00:05:22.736 --rc genhtml_legend=1 00:05:22.736 --rc geninfo_all_blocks=1 00:05:22.736 --rc geninfo_unexecuted_blocks=1 00:05:22.736 00:05:22.736 ' 00:05:22.736 17:50:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.736 --rc genhtml_branch_coverage=1 00:05:22.736 --rc genhtml_function_coverage=1 00:05:22.736 --rc genhtml_legend=1 00:05:22.736 --rc geninfo_all_blocks=1 00:05:22.736 --rc geninfo_unexecuted_blocks=1 00:05:22.736 00:05:22.736 ' 00:05:22.736 17:50:41 version -- app/version.sh@17 -- # get_header_version major 00:05:22.736 17:50:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:22.736 17:50:41 version -- app/version.sh@14 -- # cut -f2 00:05:22.736 17:50:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:22.736 17:50:41 version -- app/version.sh@17 -- # major=25 00:05:22.736 17:50:41 version -- app/version.sh@18 -- # get_header_version minor 00:05:22.736 17:50:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:22.736 17:50:41 version -- app/version.sh@14 -- # cut -f2 00:05:22.736 17:50:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:22.736 17:50:41 version -- app/version.sh@18 -- # minor=1 00:05:22.736 17:50:41 version -- app/version.sh@19 -- # get_header_version patch 00:05:22.736 17:50:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:22.736 17:50:41 version -- app/version.sh@14 -- # cut -f2 00:05:22.736 17:50:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:22.736 17:50:41 version -- app/version.sh@19 -- # patch=0 00:05:22.736 17:50:41 version -- app/version.sh@20 -- # get_header_version suffix 00:05:22.736 17:50:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:22.736 17:50:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:22.736 17:50:41 version -- app/version.sh@14 -- # cut -f2 00:05:22.736 17:50:41 version -- app/version.sh@20 -- # suffix=-pre 00:05:22.736 17:50:41 version -- app/version.sh@22 -- # version=25.1 00:05:22.737 17:50:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:22.737 17:50:41 version -- app/version.sh@28 -- # version=25.1rc0 00:05:22.737 17:50:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:22.737 17:50:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:22.737 17:50:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:22.737 17:50:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:22.737 00:05:22.737 real 0m0.265s 00:05:22.737 user 0m0.165s 00:05:22.737 sys 0m0.148s 00:05:22.737 17:50:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.737 17:50:41 version -- 
common/autotest_common.sh@10 -- # set +x 00:05:22.737 ************************************ 00:05:22.737 END TEST version 00:05:22.737 ************************************ 00:05:23.000 17:50:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:23.000 17:50:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:23.000 17:50:41 -- spdk/autotest.sh@194 -- # uname -s 00:05:23.000 17:50:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:23.000 17:50:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.000 17:50:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.000 17:50:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:23.000 17:50:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:23.000 17:50:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:23.000 17:50:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.000 17:50:41 -- common/autotest_common.sh@10 -- # set +x 00:05:23.000 17:50:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:23.000 17:50:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:23.000 17:50:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:23.000 17:50:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:23.000 17:50:41 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:05:23.000 17:50:41 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:23.000 17:50:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.000 17:50:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.000 17:50:41 -- common/autotest_common.sh@10 -- # set +x 00:05:23.000 ************************************ 00:05:23.000 START TEST nvmf_rdma 00:05:23.000 ************************************ 00:05:23.000 17:50:41 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:23.000 * Looking for test storage... 00:05:23.000 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:23.000 17:50:41 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.000 17:50:41 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.000 17:50:41 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.262 17:50:41 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.262 17:50:41 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:05:23.262 17:50:41 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.262 17:50:41 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.262 --rc genhtml_branch_coverage=1 00:05:23.262 --rc genhtml_function_coverage=1 00:05:23.262 --rc genhtml_legend=1 00:05:23.262 --rc geninfo_all_blocks=1 00:05:23.262 --rc geninfo_unexecuted_blocks=1 00:05:23.262 00:05:23.262 ' 00:05:23.262 17:50:41 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.263 --rc genhtml_branch_coverage=1 00:05:23.263 --rc genhtml_function_coverage=1 00:05:23.263 --rc genhtml_legend=1 00:05:23.263 --rc geninfo_all_blocks=1 00:05:23.263 --rc geninfo_unexecuted_blocks=1 00:05:23.263 00:05:23.263 ' 00:05:23.263 17:50:41 nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.263 --rc genhtml_branch_coverage=1 00:05:23.263 --rc genhtml_function_coverage=1 00:05:23.263 --rc genhtml_legend=1 00:05:23.263 --rc geninfo_all_blocks=1 00:05:23.263 --rc geninfo_unexecuted_blocks=1 00:05:23.263 00:05:23.263 ' 00:05:23.263 17:50:41 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.263 --rc genhtml_branch_coverage=1 00:05:23.263 --rc genhtml_function_coverage=1 00:05:23.263 --rc genhtml_legend=1 00:05:23.263 --rc geninfo_all_blocks=1 00:05:23.263 --rc geninfo_unexecuted_blocks=1 00:05:23.263 00:05:23.263 ' 00:05:23.263 17:50:41 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:05:23.263 17:50:41 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:23.263 17:50:41 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:23.263 17:50:41 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.263 17:50:41 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.263 17:50:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:23.263 ************************************ 00:05:23.263 START TEST nvmf_target_core 00:05:23.263 ************************************ 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:23.263 * Looking for test storage... 00:05:23.263 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.263 --rc genhtml_branch_coverage=1 00:05:23.263 --rc genhtml_function_coverage=1 00:05:23.263 --rc genhtml_legend=1 00:05:23.263 --rc geninfo_all_blocks=1 00:05:23.263 --rc geninfo_unexecuted_blocks=1 00:05:23.263 00:05:23.263 ' 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.263 --rc genhtml_branch_coverage=1 00:05:23.263 --rc genhtml_function_coverage=1 00:05:23.263 --rc genhtml_legend=1 00:05:23.263 --rc geninfo_all_blocks=1 00:05:23.263 --rc geninfo_unexecuted_blocks=1 00:05:23.263 00:05:23.263 ' 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.263 --rc genhtml_branch_coverage=1 00:05:23.263 --rc genhtml_function_coverage=1 00:05:23.263 --rc genhtml_legend=1 00:05:23.263 --rc geninfo_all_blocks=1 00:05:23.263 --rc geninfo_unexecuted_blocks=1 00:05:23.263 00:05:23.263 ' 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.263 --rc genhtml_branch_coverage=1 00:05:23.263 --rc genhtml_function_coverage=1 00:05:23.263 --rc genhtml_legend=1 00:05:23.263 --rc geninfo_all_blocks=1 00:05:23.263 --rc geninfo_unexecuted_blocks=1 00:05:23.263 00:05:23.263 ' 00:05:23.263 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.526 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:23.526 
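[Editor's note] The xtrace above walks two small shell techniques worth isolating before the next test starts. First, app/version.sh assembles the SPDK version string by grepping #define lines out of include/spdk/version.h; second, the lt/cmp_versions helpers in scripts/common.sh decide whether the installed lcov predates 2.x before choosing the --rc coverage option spelling. The sketch below reconstructs both from the trace alone; it is an approximation that assumes every version field is purely numeric (the real script routes fields through its decimal() validator, as the trace shows), not a copy of the canonical scripts.

    #!/usr/bin/env bash
    # Pull one field out of the version header, as traced above:
    # grep the #define line, take the tab-separated value, drop quotes.
    get_header_version() {
        local field=$1 header=$2
        grep -E "^#define SPDK_VERSION_${field}[[:space:]]+" "$header" | cut -f2 | tr -d '"'
    }

    header=include/spdk/version.h
    major=$(get_header_version MAJOR "$header")    # 25 in this run
    minor=$(get_header_version MINOR "$header")    # 1
    patch=$(get_header_version PATCH "$header")    # 0
    suffix=$(get_header_version SUFFIX "$header")  # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0  # 25.1rc0, matching py_version above

    # Field-by-field dotted-version comparison, shaped like cmp_versions:
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov is older than 2.x; keep the --rc option spelling'

This check is why every test section re-emits the same LCOV_OPTS/LCOV export block: each run_test re-sources the common scripts and re-runs it.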
************************************ 00:05:23.526 START TEST nvmf_abort 00:05:23.526 ************************************ 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:23.526 * Looking for test storage... 00:05:23.526 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.526 17:50:41 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.526 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.526 --rc genhtml_branch_coverage=1 00:05:23.526 --rc genhtml_function_coverage=1 00:05:23.526 --rc genhtml_legend=1 00:05:23.526 --rc geninfo_all_blocks=1 00:05:23.526 --rc geninfo_unexecuted_blocks=1 00:05:23.526 00:05:23.526 ' 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.527 --rc genhtml_branch_coverage=1 00:05:23.527 --rc genhtml_function_coverage=1 00:05:23.527 --rc genhtml_legend=1 00:05:23.527 --rc geninfo_all_blocks=1 00:05:23.527 --rc geninfo_unexecuted_blocks=1 00:05:23.527 00:05:23.527 ' 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.527 --rc genhtml_branch_coverage=1 00:05:23.527 --rc genhtml_function_coverage=1 00:05:23.527 --rc genhtml_legend=1 00:05:23.527 --rc geninfo_all_blocks=1 00:05:23.527 --rc geninfo_unexecuted_blocks=1 00:05:23.527 00:05:23.527 ' 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.527 --rc genhtml_branch_coverage=1 00:05:23.527 --rc genhtml_function_coverage=1 00:05:23.527 --rc genhtml_legend=1 00:05:23.527 --rc geninfo_all_blocks=1 00:05:23.527 --rc geninfo_unexecuted_blocks=1 00:05:23.527 00:05:23.527 ' 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.527 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.789 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:23.789 17:50:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:31.935 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:05:31.936 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:05:31.936 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:05:31.936 Found net devices under 0000:98:00.0: mlx_0_0 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:05:31.936 Found net devices under 0000:98:00.1: mlx_0_1 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:31.936 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:31.937 4: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:05:31.937 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:05:31.937 altname enp152s0f0np0 00:05:31.937 altname ens817f0np0 00:05:31.937 inet 192.168.100.8/24 scope global mlx_0_0 00:05:31.937 valid_lft forever preferred_lft forever 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:31.937 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:31.937 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:05:31.937 altname enp152s0f1np1 00:05:31.937 altname ens817f1np1 00:05:31.937 inet 192.168.100.9/24 scope global mlx_0_1 00:05:31.937 valid_lft forever preferred_lft forever 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:31.937 17:50:49 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:31.937 192.168.100.9' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:31.937 192.168.100.9' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:31.937 192.168.100.9' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:31.937 17:50:49 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3346357 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3346357 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3346357 ']' 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.937 17:50:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.937 [2024-11-20 17:50:49.727838] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:31.937 [2024-11-20 17:50:49.727916] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:31.937 [2024-11-20 17:50:49.833328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.937 [2024-11-20 17:50:49.887699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:31.937 [2024-11-20 17:50:49.887757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:31.937 [2024-11-20 17:50:49.887767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:31.937 [2024-11-20 17:50:49.887774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:31.937 [2024-11-20 17:50:49.887781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
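[Editor's note] nvmfappstart above reduces to "launch nvmf_tgt on a core mask, then block until its RPC socket answers". A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket; the retry budget and poll interval here are illustrative, not the values waitforlisten actually uses:

    # Flags copied from the trace: shm id 0, full tracepoint mask, cores 1-3.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for ((i = 0; i < 120; i++)); do
        # rpc_get_methods is a stock SPDK RPC; once it answers, the target
        # has finished DPDK/EAL init and is serving the UNIX socket.
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited during startup' >&2; exit 1; }
        sleep 0.5
    done

The 0xE mask is also why three reactors come up on cores 1-3 just below, leaving core 0 free for the test harness.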
00:05:31.937 [2024-11-20 17:50:49.889919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.937 [2024-11-20 17:50:49.890107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.937 [2024-11-20 17:50:49.890108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.199 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.199 [2024-11-20 17:50:50.637782] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x239d650/0x23a1b40) succeed. 00:05:32.199 [2024-11-20 17:50:50.665363] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x239ec40/0x23e31e0) succeed. 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.459 Malloc0 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.459 Delay0 00:05:32.459 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.460 [2024-11-20 17:50:50.842929] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.460 17:50:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:32.460 [2024-11-20 17:50:50.965183] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:35.105 Initializing NVMe Controllers 00:05:35.105 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:05:35.105 controller IO queue size 128 less than required 00:05:35.105 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:35.105 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:35.105 Initialization complete. Launching workers. 
00:05:35.105 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31166 00:05:35.105 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31227, failed to submit 62 00:05:35.105 success 31167, unsuccessful 60, failed 0 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:05:35.105 rmmod nvme_rdma 00:05:35.105 rmmod nvme_fabrics 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3346357 ']' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3346357 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3346357 ']' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3346357 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3346357 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3346357' 00:05:35.105 killing process with pid 3346357 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3346357 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3346357 00:05:35.105 17:50:53 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:05:35.105 00:05:35.105 real 0m11.529s 00:05:35.105 user 0m14.731s 00:05:35.105 sys 0m6.283s 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.105 ************************************ 00:05:35.105 END TEST nvmf_abort 00:05:35.105 ************************************ 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:35.105 ************************************ 00:05:35.105 START TEST nvmf_ns_hotplug_stress 00:05:35.105 ************************************ 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:35.105 * Looking for test storage... 00:05:35.105 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
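The cmp_versions trace that starts above and finishes just below (checking lcov 1.15 against 2) is a field-by-field numeric compare after splitting on dots, dashes, and colons. A minimal standalone sketch of the same logic, assuming purely numeric fields (the real scripts/common.sh also normalizes fields through its decimal helper):

# Succeed when version $1 sorts strictly before version $2
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do           # missing fields count as 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                  # equal is not "less than"
}
version_lt 1.15 2 && echo "old lcov detected"

Here the check passes (1 < 2), which is why the old-style --rc lcov_branch_coverage=1 option spelling is exported in the LCOV_OPTS below.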
00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.105 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.366 --rc genhtml_branch_coverage=1 00:05:35.366 --rc genhtml_function_coverage=1 00:05:35.366 --rc genhtml_legend=1 00:05:35.366 --rc geninfo_all_blocks=1 00:05:35.366 --rc geninfo_unexecuted_blocks=1 00:05:35.366 00:05:35.366 ' 00:05:35.366 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.366 --rc genhtml_branch_coverage=1 00:05:35.366 --rc genhtml_function_coverage=1 00:05:35.366 --rc genhtml_legend=1 00:05:35.366 --rc geninfo_all_blocks=1 00:05:35.366 --rc geninfo_unexecuted_blocks=1 00:05:35.366 00:05:35.366 ' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.367 --rc genhtml_branch_coverage=1 00:05:35.367 --rc genhtml_function_coverage=1 00:05:35.367 --rc genhtml_legend=1 00:05:35.367 --rc geninfo_all_blocks=1 00:05:35.367 --rc geninfo_unexecuted_blocks=1 00:05:35.367 00:05:35.367 ' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:35.367 --rc genhtml_branch_coverage=1 00:05:35.367 --rc genhtml_function_coverage=1 00:05:35.367 --rc genhtml_legend=1 00:05:35.367 --rc geninfo_all_blocks=1 00:05:35.367 --rc geninfo_unexecuted_blocks=1 00:05:35.367 00:05:35.367 ' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.367 17:50:53 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.367 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:35.367 17:50:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:43.507 17:51:01 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:05:43.507 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:43.507 17:51:01 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:43.507 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:05:43.508 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:05:43.508 Found net devices under 0000:98:00.0: mlx_0_0 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
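Each "Found 0000:98:00.x (0x15b3 - 0x1015)" line above comes from matching the ConnectX PCI vendor/device IDs, and the netdev behind each function is then read straight out of sysfs, as echoed for both ports just below. The same lookup in isolation (the two BDFs are the ones discovered in this run):

# Map a PCI function to its kernel net device(s), as the trace does
for pci in 0000:98:00.0 0000:98:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the device name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done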
00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:05:43.508 Found net devices under 0000:98:00.1: mlx_0_1 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:43.508 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:43.508 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:05:43.508 altname enp152s0f0np0 00:05:43.508 altname ens817f0np0 00:05:43.508 inet 192.168.100.8/24 scope global mlx_0_0 00:05:43.508 valid_lft forever preferred_lft forever 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:43.508 17:51:01 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:43.508 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:43.508 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:05:43.508 altname enp152s0f1np1 00:05:43.508 altname ens817f1np1 00:05:43.508 inet 192.168.100.9/24 scope global mlx_0_1 00:05:43.508 valid_lft forever preferred_lft forever 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:43.508 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:43.509 17:51:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:43.509 192.168.100.9' 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:43.509 192.168.100.9' 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:43.509 192.168.100.9' 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:43.509 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3351650 00:05:43.770 17:51:02 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3351650 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3351650 ']' 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.770 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.770 [2024-11-20 17:51:02.118330] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:43.770 [2024-11-20 17:51:02.118404] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:43.770 [2024-11-20 17:51:02.218456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.770 [2024-11-20 17:51:02.270079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:43.770 [2024-11-20 17:51:02.270134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:43.770 [2024-11-20 17:51:02.270143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:43.770 [2024-11-20 17:51:02.270151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:43.770 [2024-11-20 17:51:02.270157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
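The waitforlisten block above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") amounts to polling the app's RPC socket while confirming the pid stays alive; the startup notices it waits for continue just below. A rough sketch, under the assumption that an rpc.py call such as spdk_get_version only succeeds once the socket is listening; the real helper retries more carefully and handles extra edge cases:

# Launch the target with the flags recorded above, then wait for its RPC socket
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
for (( i = 100; i > 0; i-- )); do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
    sleep 0.5
done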
00:05:43.770 [2024-11-20 17:51:02.272057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.770 [2024-11-20 17:51:02.272351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.770 [2024-11-20 17:51:02.272352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.711 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.711 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:44.711 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:44.711 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.711 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.711 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:44.711 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:44.711 17:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:44.711 [2024-11-20 17:51:03.157760] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa38650/0xa3cb40) succeed. 00:05:44.711 [2024-11-20 17:51:03.171850] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa39c40/0xa7e1e0) succeed. 00:05:44.972 17:51:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:44.972 17:51:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:45.232 [2024-11-20 17:51:03.633540] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:45.232 17:51:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:45.494 17:51:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:45.494 Malloc0 00:05:45.756 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:45.756 Delay0 00:05:45.756 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.017 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:05:46.017 NULL1 00:05:46.279 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:46.279 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:46.279 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3352155 00:05:46.279 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155 00:05:46.279 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.540 Read completed with error (sct=0, sc=11) 00:05:46.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.540 17:51:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.801 17:51:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:46.801 17:51:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:46.801 true 00:05:46.801 17:51:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155 00:05:46.801 17:51:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.742 17:51:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.742 Message suppressed 999 
times: Read completed with error (sct=0, sc=11)
00:05:47.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:48.003 17:51:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:05:48.003 17:51:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:05:48.003 true
00:05:48.003 17:51:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:48.003 17:51:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:48.945 17:51:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:49.206 17:51:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:05:49.206 17:51:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:05:49.206 true
00:05:49.206 17:51:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:49.206 17:51:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:50.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.148 17:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:50.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.409 17:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:05:50.409 17:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:05:50.409 true
00:05:50.409 17:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:50.409 17:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:51.351 17:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:51.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:51.611 17:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:05:51.611 17:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:05:51.611 true
00:05:51.611 17:51:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:51.611 17:51:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:52.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:52.554 17:51:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:52.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:52.554 17:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:05:52.554 17:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:05:52.815 true
00:05:52.815 17:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:52.815 17:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:53.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:53.758 17:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:53.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.020 17:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:05:54.020 17:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:05:54.020 true
00:05:54.020 17:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:54.020 17:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.964 17:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:54.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:55.225 17:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:05:55.225 17:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:05:55.225 true
00:05:55.225 17:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:55.225 17:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:56.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:56.168 17:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:56.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:56.429 17:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:05:56.429 17:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:05:56.429 true
00:05:56.429 17:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:56.429 17:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:57.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:57.370 17:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:57.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:57.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:57.631 17:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:57.631 17:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:57.631 true
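The block above is bash xtrace output from test/nvmf/target/ns_hotplug_stress.sh; the @44-@50 tags are the script line numbers being executed, each entry prefixed with the log timestamp and wall-clock time. Pieced together from those tags, the main stress loop looks roughly like the sketch below. This is a reconstruction, not the script verbatim: $rpc_py and $perf_pid are assumed names standing in for the scripts/rpc.py invocation and the I/O-generator pid (3352155) seen in the trace. The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" notices are the workload racing the hotplug; with sct=0 (generic command status), sc=11 appears to be 0x0b, Invalid Namespace or Format, the expected completion status while namespace 1 is detached.

  # Sketch of ns_hotplug_stress.sh lines 44-50, reconstructed from the xtrace; names are assumptions.
  while kill -0 "$perf_pid"; do                                      # line 44: loop while the I/O generator is alive
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove namespace 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: re-attach the Delay0 bdev as ns 1
    null_size=$((null_size + 1))                                     # line 49: bump the null bdev size counter
    $rpc_py bdev_null_resize NULL1 $null_size                        # line 50: resize NULL1 while I/O is running
  done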
00:05:57.631 17:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:57.631 17:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:58.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:58.573 17:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:58.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:58.573 17:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:05:58.573 17:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:05:58.834 true
00:05:58.834 17:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:05:58.834 17:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:59.773 17:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:59.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:00.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:00.034 17:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:06:00.034 17:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:06:00.294 true
00:06:00.294 17:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:00.294 17:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:01.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:01.236 17:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:01.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:01.236 17:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:06:01.236 17:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:06:01.498 true
00:06:01.498 17:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:01.498 17:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:02.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:02.441 17:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:02.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:02.442 17:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:06:02.442 17:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:06:02.703 true
00:06:02.703 17:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:02.703 17:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:03.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:03.646 17:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:03.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:03.646 17:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:06:03.646 17:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:06:03.646 true
00:06:03.907 17:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:03.907 17:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:04.742 17:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:04.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:04.742 17:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:06:04.742 17:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:06:05.003 true
00:06:05.003 17:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:05.003 17:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:05.945 17:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:05.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:06.207 17:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:06:06.207 17:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:06:06.207 true
00:06:06.207 17:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:06.207 17:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:07.151 17:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:07.412 17:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:06:07.412 17:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:06:07.412 true
00:06:07.412 17:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:07.412 17:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:08.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:08.356 17:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:08.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:08.617 17:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:06:08.617 17:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:06:08.617 true
00:06:08.617 17:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:08.617 17:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:09.559 17:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:09.559 17:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:06:09.559 17:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:06:09.820 true
00:06:09.820 17:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:09.820 17:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:10.765 17:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:10.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:10.765 17:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:06:10.765 17:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:06:11.025 true
00:06:11.025 17:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:11.025 17:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:11.967 17:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:11.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:12.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:12.228 17:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:06:12.228 17:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:06:12.228 true
00:06:12.488 17:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:12.488 17:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:13.060 17:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:13.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:13.322 17:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:06:13.322 17:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:06:13.583 true
00:06:13.583 17:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:13.583 17:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:14.526 17:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:14.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:14.527 17:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:06:14.527 17:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:06:14.788 true
00:06:14.788 17:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:14.788 17:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:15.731 17:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:15.732 17:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:06:15.732 17:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:06:15.993 true
00:06:15.993 17:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:15.993 17:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:16.936 17:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:16.936 17:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:06:16.936 17:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:06:17.197 true
00:06:17.197 17:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:17.197 17:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.198 17:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:17.458 17:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:06:17.458 17:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:17.718 true
00:06:17.718 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:17.718 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.718 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:17.978 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:17.978 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:18.239 true
00:06:18.239 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:18.239 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.500 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:18.500 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:18.500 17:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:18.761 true
00:06:18.761 Initializing NVMe Controllers
00:06:18.761 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:18.761 Controller IO queue size 128, less than required.
00:06:18.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:18.761 Controller IO queue size 128, less than required.
00:06:18.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:18.761 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:18.761 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:18.761 Initialization complete. Launching workers.
00:06:18.761 ========================================================
00:06:18.761 Latency(us)
00:06:18.761 Device Information : IOPS MiB/s Average min max
00:06:18.761 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7717.13 3.77 14681.45 1282.99 1187466.99
00:06:18.761 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 39403.30 19.24 3248.12 1618.89 393723.71
00:06:18.761 ========================================================
00:06:18.761 Total : 47120.43 23.01 5120.61 1282.99 1187466.99
00:06:18.761
00:06:18.761 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:18.761 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:19.023 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:19.023 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:06:19.023 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:19.284 true
00:06:19.284 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3352155
00:06:19.284 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3352155) - No such process
00:06:19.284 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3352155
00:06:19.284 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
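In the bdevperf shutdown summary above, NSID 1 appears to be the hotplugged Delay0 namespace and NSID 2 the undisturbed one, which would explain NSID 1's roughly 4.5x higher average latency and its ~1.19 s worst case. The Total row's average is the IOPS-weighted mean of the two devices; a quick check (a hypothetical one-liner, not part of the test):

  awk 'BEGIN { printf "%.2f\n", (7717.13*14681.45 + 39403.30*3248.12) / (7717.13 + 39403.30) }'
  # prints 5120.61, matching the Total row; the Total min/max are the per-device extremes

The "kill: (3352155) - No such process" message just above is the loop's exit path: bdevperf has finished, so the kill -0 liveness probe on line 44 fails and the script falls through to the wait on line 53 and the namespace cleanup on lines 54-55.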
00:06:19.545 17:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:19.545 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:19.545 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:19.545 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:19.545 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.545 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:19.806 null0
00:06:19.806 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.806 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.806 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:20.068 null1
00:06:20.068 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:20.068 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:20.068 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:20.068 null2
00:06:20.068 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:20.068 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:20.068 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:20.329 null3
00:06:20.329 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:20.329 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:20.329 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:06:20.642 null4
00:06:20.642 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:20.642 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:20.642 17:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:06:20.642 null5
00:06:20.642 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:20.642 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:20.642 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:06:20.952 null6
00:06:20.952 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:20.952 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:20.952 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:06:20.952 null7
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
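At this point the test has switched to its multi-threaded phase: the @58-@60 tags show eight 100 MiB null bdevs (4096-byte block size) being created, the @62-@64 tags show eight add_remove workers being forked, and the @14-@18 tags are the body of the add_remove helper each worker runs. The counters appear out of order because the eight background shells' xtrace lines interleave. Reconstructed from those tags (same caveats as the earlier sketch; $rpc_py and exact quoting are assumptions):

  add_remove() {
    local nsid=$1 bdev=$2                                                        # line 14
    for (( i = 0; i < 10; ++i )); do                                             # line 16: ten add/remove rounds
      $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev    # line 17
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid          # line 18
    done
  }
  nthreads=8; pids=()                            # line 58
  for (( i = 0; i < nthreads; ++i )); do         # line 59
    $rpc_py bdev_null_create null$i 100 4096     # line 60: 100 MiB null bdev, 4096-byte blocks
  done
  for (( i = 0; i < nthreads; ++i )); do         # line 62
    add_remove $(( i + 1 )) null$i &             # line 63: one worker per namespace id
    pids+=( $! )                                 # line 64
  done
  wait "${pids[@]}"                              # line 66: the 'wait 3359724 ...' seen just below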
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:21.218 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3359724 3359726 3359729 3359731 3359735 3359738 3359742 3359747
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:21.219 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.480 17:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:21.741 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:21.741 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:21.741 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:21.741 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.742 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:22.003 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16
-- # (( ++i )) 00:06:22.003 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.003 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.003 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.003 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.003 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.003 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.003 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.004 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
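For readers following the xtrace above: ns_hotplug_stress.sh@14-18 is a namespace add/remove loop, and @62-66 runs one such loop per null bdev in the background and waits on all of the worker PIDs, which is why add_ns/remove_ns calls for different nsids interleave freely in the trace. Below is a minimal sketch of that pattern reconstructed from the trace; the function name add_remove, the loop bound of 10, and the rpc.py argument order come from the trace itself, while the surrounding scaffolding (variable names, the null0..null7 bdevs being created beforehand) is an assumption, not the verbatim SPDK script.

  #!/usr/bin/env bash
  # Sketch of the hotplug stress pattern seen in the trace above (assumptions noted).
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1

  add_remove() {                # trace: ns_hotplug_stress.sh@14-18
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
          "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
      done
  }

  # trace: @62-66, one background worker per namespace, then wait on every PID
  pids=()
  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &
      pids+=($!)
  done
  wait "${pids[@]}"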
00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.266 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.528 17:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.528 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.528 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.528 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.528 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.528 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.528 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.790 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.053 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.316 17:51:41 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.316 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.579 
17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.579 17:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.579 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.579 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.579 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.579 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.579 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.579 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.841 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.103 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:24.104 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.365 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:24.626 17:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.626 17:51:43 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:24.626 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:24.887 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
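The @68/@70 records above are the success path of the usual SPDK test scaffolding: a cleanup handler is installed as a signal/EXIT trap so the target gets torn down even if the test dies mid-loop, and once the stress body completes the trap is cleared and the teardown runs exactly once, explicitly. A hedged sketch of that shape follows; the function name nvmftestfini matches the trace, while the trap installation happens earlier in the script and is assumed here.

  # Sketch of the teardown handshake visible at ns_hotplug_stress.sh@68-70.
  trap 'nvmftestfini' SIGINT SIGTERM EXIT   # assumed: installed near the top of the test
  # ... namespace stress loops run here ...
  trap - SIGINT SIGTERM EXIT                # @68: success path, drop the handler
  nvmftestfini                              # @70: tear down the target explicitly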
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:06:25.148 rmmod nvme_rdma
00:06:25.148 rmmod nvme_fabrics
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3351650 ']'
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3351650
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3351650 ']'
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3351650
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3351650
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3351650'
00:06:25.148 killing process with pid 3351650
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3351650
00:06:25.148 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3351650
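The nvmf/common.sh@121-129 records and the killprocess call after them show the two halves of nvmftestfini: the kernel RDMA initiator modules are unloaded with error checking relaxed (set +e, because references can linger briefly after the last disconnect), and the target process is killed by PID with a liveness check first and a reaping wait after. A rough sketch, inferred from the trace rather than copied from the scripts; the retry body and the TEST_TRANSPORT variable name are assumptions.

  # Module-unload half (nvmf/common.sh@121-129 as traced): flush, then unload leniently.
  sync                                          # @121
  if [ "$TEST_TRANSPORT" = rdma ]; then         # @123, assumed variable name
      set +e                                    # @124: modprobe -r may fail while refs drain
      for i in {1..20}; do                      # @125
          modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break  # @126-127; break assumed
      done
      set -e                                    # @128
  fi

  # Process half (common/autotest_common.sh@954-978, sketched): kill only if alive.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                 # @954: no PID, nothing to do
      kill -0 "$pid" || return 0                # @958: already gone
      local comm
      comm=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 in this run
      [ "$comm" = sudo ] && return 1            # @964: sudo wrappers take another path (elided)
      echo "killing process with pid $pid"      # @972
      kill "$pid"                               # @973
      wait "$pid" || true                       # @978: reap, ignore the exit status
  }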
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:06:25.410
00:06:25.410 real 0m50.278s
00:06:25.410 user 3m20.559s
00:06:25.410 sys 0m12.937s
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:25.410 ************************************
00:06:25.410 END TEST nvmf_ns_hotplug_stress
00:06:25.410 ************************************
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:25.410 ************************************
00:06:25.410 START TEST nvmf_delete_subsystem
00:06:25.410 ************************************
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:06:25.410 * Looking for test storage...
00:06:25.410 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
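The run_test records here (nvmf_target_core.sh@23 into common/autotest_common.sh@1105-1130) are the harness glue between test suites: it prints the START/END banners seen above, runs the child script under bash's time builtin (which produces the real/user/sys lines), and propagates the child's exit code so the pipeline fails on the right test. Approximately, with the internals guessed from the banner output rather than taken from the real implementation:

  # Hedged sketch of a run_test-style wrapper; not the SPDK source.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # run the test script; bash prints real/user/sys
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  run_test nvmf_delete_subsystem ./delete_subsystem.sh --transport=rdma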
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:06:25.410 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:25.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.672 --rc genhtml_branch_coverage=1
00:06:25.672 --rc genhtml_function_coverage=1
00:06:25.672 --rc genhtml_legend=1
00:06:25.672 --rc geninfo_all_blocks=1
00:06:25.672 --rc geninfo_unexecuted_blocks=1
00:06:25.672
00:06:25.672 '
00:06:25.672 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:25.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.672 --rc genhtml_branch_coverage=1
00:06:25.672 --rc genhtml_function_coverage=1
00:06:25.672 --rc genhtml_legend=1
00:06:25.672 --rc geninfo_all_blocks=1
00:06:25.672 --rc geninfo_unexecuted_blocks=1
00:06:25.672
00:06:25.672 '
00:06:25.673 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:25.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.673 --rc genhtml_branch_coverage=1
00:06:25.673 --rc genhtml_function_coverage=1
00:06:25.673 --rc genhtml_legend=1
00:06:25.673 --rc geninfo_all_blocks=1
00:06:25.673 --rc geninfo_unexecuted_blocks=1
00:06:25.673
00:06:25.673 '
00:06:25.673 17:51:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:25.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.673 --rc genhtml_branch_coverage=1
00:06:25.673 --rc genhtml_function_coverage=1
00:06:25.673 --rc genhtml_legend=1
00:06:25.673 --rc geninfo_all_blocks=1
00:06:25.673 --rc geninfo_unexecuted_blocks=1
00:06:25.673
00:06:25.673 '
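The scripts/common.sh@333-368 trace is a field-by-field version comparison: both version strings are split on '.', '-' and ':' via IFS, padded to the longer length, and compared numerically per field. Here it concludes 1.15 < 2 for the installed lcov, so the coverage flags above get exported. The same idea in standalone form; this is a sketch, and the real cmp_versions handles several comparison operators, not just less-than:

  # Compare two dotted versions numerically, field by field (sketch of cmp_versions).
  version_lt() {
      local IFS=.-:
      local -a v1=($1) v2=($2)
      local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      local v a b
      for ((v = 0; v < n; v++)); do
          a=${v1[v]:-0} b=${v2[v]:-0}   # pad the shorter version with zeros
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal is not less-than
  }

  version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace's conclusion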
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three toolchain dirs repeated several more times...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...repeated toolchain dirs elided...]:/var/lib/snapd/snap/bin
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated toolchain dirs elided...]:/var/lib/snapd/snap/bin
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated toolchain dirs elided...]:/var/lib/snapd/snap/bin
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.673 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.673 17:51:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:33.813 17:51:52 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:33.813 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:06:33.814 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:33.814 
17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:06:33.814 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:06:33.814 Found net devices under 0000:98:00.0: mlx_0_0 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:06:33.814 Found net devices under 0000:98:00.1: mlx_0_1 00:06:33.814 17:51:52 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:33.814 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:33.814 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:06:33.814 altname enp152s0f0np0 00:06:33.814 altname ens817f0np0 00:06:33.814 inet 192.168.100.8/24 scope global mlx_0_0 00:06:33.814 valid_lft forever preferred_lft forever 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:33.814 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:33.815 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:33.815 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:06:33.815 altname enp152s0f1np1 00:06:33.815 
altname ens817f1np1 00:06:33.815 inet 192.168.100.9/24 scope global mlx_0_1 00:06:33.815 valid_lft forever preferred_lft forever 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:33.815 17:51:52 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:33.815 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:33.815 192.168.100.9' 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:34.075 192.168.100.9' 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:34.075 192.168.100.9' 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3364942 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3364942 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:34.075 17:51:52 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3364942 ']' 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.075 17:51:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.075 [2024-11-20 17:51:52.457557] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:34.075 [2024-11-20 17:51:52.457631] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.075 [2024-11-20 17:51:52.540966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.075 [2024-11-20 17:51:52.581374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.075 [2024-11-20 17:51:52.581414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.075 [2024-11-20 17:51:52.581422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.075 [2024-11-20 17:51:52.581429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.075 [2024-11-20 17:51:52.581435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
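
Note: nvmftestinit, traced above, turns the two Mellanox ports into target addresses by walking the RDMA-capable netdevs (mlx_0_0, mlx_0_1) and pulling each one's IPv4 address out of "ip -o -4 addr show". A minimal standalone sketch of that helper; the awk/cut pipeline is verbatim from the get_ip_address trace, while the interface names are the ones this particular run discovered:

# Sketch of nvmf/common.sh's get_ip_address as traced above.
# "ip -o -4" prints one line per address; field 4 is "addr/prefixlen",
# so cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do   # interface names from this run
    addr=$(get_ip_address "$nic")
    if [[ -z $addr ]]; then
        echo "no IPv4 address on $nic" >&2
        continue
    fi
    echo "$nic $addr"
done

On this host the loop yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1, which the harness then uses as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.
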
00:06:34.075 [2024-11-20 17:51:52.582708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.075 [2024-11-20 17:51:52.582711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 [2024-11-20 17:51:53.332531] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x136b7a0/0x136fc90) succeed. 00:06:35.032 [2024-11-20 17:51:53.345705] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x136ccf0/0x13b1330) succeed. 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 [2024-11-20 17:51:53.438253] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 NULL1 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 Delay0 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3365290 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:35.032 17:51:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:35.292 [2024-11-20 17:51:53.569318] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
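
Note: stripped of the xtrace noise, the setup that delete_subsystem.sh has just driven is a short RPC sequence: create the RDMA transport and a subsystem, back its namespace with a null bdev wrapped in a delay bdev so that I/O is still outstanding when the subsystem disappears, start perf against it, then delete the subsystem mid-run. A condensed sketch, assuming rpc_cmd wraps scripts/rpc.py on the default /var/tmp/spdk.sock socket (as the harness does); every command and value below is taken from the trace:

# rpc_cmd as the harness uses it (default RPC socket assumed)
rpc_cmd() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"; }

rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512 B blocks
# Roughly 1 s of artificial latency per op (the values are microseconds),
# so the queue stays full when the subsystem is deleted:
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Initiator side: queue depth 128 against the delayed namespace ...
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
# ... and pull the subsystem out from under it:
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The delay bdev is the crux of the test: at about one second per operation, none of the 128 queued commands can complete before the delete lands, so the target has to abort every one of them, as the completion errors below show.
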
00:06:37.205 17:51:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:37.205 17:51:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.205 17:51:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 NVMe io qpair process completion error 00:06:38.147 NVMe io qpair process completion error 00:06:38.147 NVMe io qpair process completion error 00:06:38.147 NVMe io qpair process completion error 00:06:38.147 NVMe io qpair process completion error 00:06:38.147 NVMe io qpair process completion error 00:06:38.147 17:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.147 17:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:38.147 17:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3365290 00:06:38.147 17:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:38.718 17:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:38.718 17:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3365290 00:06:38.718 17:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Write completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Write completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Write completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Write completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: -6 00:06:39.291 Read completed with error (sct=0, sc=8) 00:06:39.291 starting I/O failed: 
-6
[... several hundred more "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6" submission failures, elided ...]
00:06:39.292 Initializing NVMe Controllers
00:06:39.292 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:39.292 Controller IO queue size 128, less than required.
00:06:39.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:39.292 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:39.292 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:39.292 Initialization complete. Launching workers.
00:06:39.292 ========================================================
00:06:39.292 Latency(us)
00:06:39.292 Device Information : IOPS MiB/s Average min max
00:06:39.292 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.78 0.04 1589304.72 1000067.37 2961337.08
00:06:39.292 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.78 0.04 1590629.15 1001419.15 2962244.11
00:06:39.292 ========================================================
00:06:39.292 Total : 161.55 0.08 1589966.93 1000067.37 2962244.11
00:06:39.292
00:06:39.292 [2024-11-20 17:51:57.670457] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:06:39.292 17:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:39.292 17:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3365290
00:06:39.292 17:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:39.292 [2024-11-20 17:51:57.684424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-11-20 17:51:57.684437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
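
Note: the flood of aborted completions above is the test passing, not failing. Status sct=0, sc=8 is NVMe generic status 0x08, "Command Aborted due to SQ Deletion", which is exactly what deleting the subsystem under load should produce, and the CQ transport error -6 marks the controller going away beneath the initiator. All the script has left to do is wait for perf to exit. A sketch of that wait loop as traced (delete_subsystem.sh@34-38); the timeout branch is assumed from the 30-iteration guard, not shown in the trace:

# Poll the perf process (perf_pid from the sketch above) until it exits;
# kill -0 sends no signal, it only tests that the pid is still alive.
# Give up after roughly 31 * 0.5 s.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
        echo "spdk_nvme_perf ($perf_pid) survived the subsystem delete" >&2
        break
    fi
    sleep 0.5
done

Once perf is gone, kill -0 starts failing with "No such process" (visible a few lines below), the loop ends, and the harness re-creates the subsystem and listener to exercise the same teardown against a fresh workload.
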
00:06:39.292 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3365290 00:06:39.863 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3365290) - No such process 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3365290 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3365290 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3365290 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.863 [2024-11-20 17:51:58.208124] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3366230 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:39.863 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.863 [2024-11-20 17:51:58.313515] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:40.435 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.435 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:40.435 17:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:41.005 17:51:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:41.005 17:51:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:41.006 17:51:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:41.267 17:51:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:41.267 17:51:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:41.267 17:51:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:41.838 17:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:41.838 17:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:41.838 17:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:42.410 17:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:42.410 17:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:42.410 17:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:42.981 17:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:42.981 17:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:42.981 17:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:43.242 17:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:43.242 17:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:43.242 17:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:43.814 17:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:43.814 17:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:43.814 17:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.385 17:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.385 17:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:44.385 17:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.958 17:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.958 17:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:44.958 17:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.530 17:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.530 17:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:45.530 17:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.791 17:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.791 17:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:45.791 17:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.365 17:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.365 17:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:46.365 17:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.938 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.938 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 00:06:46.938 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.938 Initializing NVMe Controllers 00:06:46.938 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:46.938 Controller IO queue size 128, less than required. 00:06:46.938 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:46.938 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:46.938 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:46.938 Initialization complete. Launching workers.
00:06:46.938 ========================================================
00:06:46.938                                                                  Latency(us)
00:06:46.938 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:06:46.938 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1001094.96 1000040.04 1003785.65
00:06:46.938 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1001679.34 1000043.56 1005665.24
00:06:46.938 ========================================================
00:06:46.938 Total                                                                          :     256.00       0.12 1001387.15 1000040.04 1005665.24
00:06:46.938
00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3366230 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3366230) - No such process 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3366230 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:47.511 rmmod nvme_rdma 00:06:47.511 rmmod nvme_fabrics 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3364942 ']' 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3364942 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3364942 ']' 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3364942 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
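A quick consistency check on the report above: the namespace sits on the Delay0 bdev, and the minimum latency of roughly 1,000,040 us suggests about one second of injected delay per I/O. With -q 128 commands outstanding per core, Little's law (throughput = queue depth / latency) predicts 128 / 1 s = 128 IOPS per core, matching the IOPS column, and 128 IOPS x 512 B = 0.0625 MiB/s matches the reported 0.06 MiB/s (256 IOPS and 0.12 MiB/s for the two-core total).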
00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3364942 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3364942' 00:06:47.511 killing process with pid 3364942 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3364942 00:06:47.511 17:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3364942 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:47.773 00:06:47.773 real 0m22.277s 00:06:47.773 user 0m50.657s 00:06:47.773 sys 0m7.473s 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:47.773 ************************************ 00:06:47.773 END TEST nvmf_delete_subsystem 00:06:47.773 ************************************ 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:47.773 ************************************ 00:06:47.773 START TEST nvmf_host_management 00:06:47.773 ************************************ 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:06:47.773 * Looking for test storage... 
00:06:47.773 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.773 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.035 --rc genhtml_branch_coverage=1 00:06:48.035 --rc genhtml_function_coverage=1 00:06:48.035 --rc genhtml_legend=1 00:06:48.035 --rc geninfo_all_blocks=1 00:06:48.035 --rc geninfo_unexecuted_blocks=1 00:06:48.035 00:06:48.035 ' 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.035 --rc genhtml_branch_coverage=1 00:06:48.035 --rc genhtml_function_coverage=1 00:06:48.035 --rc genhtml_legend=1 00:06:48.035 --rc geninfo_all_blocks=1 00:06:48.035 --rc geninfo_unexecuted_blocks=1 00:06:48.035 00:06:48.035 ' 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.035 --rc genhtml_branch_coverage=1 00:06:48.035 --rc genhtml_function_coverage=1 00:06:48.035 --rc genhtml_legend=1 00:06:48.035 --rc geninfo_all_blocks=1 00:06:48.035 --rc geninfo_unexecuted_blocks=1 00:06:48.035 00:06:48.035 ' 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.035 --rc genhtml_branch_coverage=1 00:06:48.035 --rc genhtml_function_coverage=1 00:06:48.035 --rc genhtml_legend=1 00:06:48.035 --rc geninfo_all_blocks=1 00:06:48.035 --rc geninfo_unexecuted_blocks=1 00:06:48.035 00:06:48.035 ' 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.035 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.036 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.036 17:52:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:56.185 17:52:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:06:56.185 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:06:56.185 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:06:56.185 Found net devices under 0000:98:00.0: mlx_0_0 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:98:00.1: mlx_0_1' 00:06:56.185 Found net devices under 0000:98:00.1: mlx_0_1 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:56.185 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
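The rdma_device_init sequence traced above amounts to loading the kernel RDMA stack before any addresses are assigned or listeners created. Run by hand, and assuming the modules are built for the running kernel, the same step looks like:

    # Same module list as load_ib_rdma_modules in nvmf/common.sh traces above.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        sudo modprobe "$m"
    done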
00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:56.186 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:56.186 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:06:56.186 altname enp152s0f0np0 00:06:56.186 altname ens817f0np0 00:06:56.186 inet 192.168.100.8/24 scope global mlx_0_0 00:06:56.186 valid_lft forever preferred_lft forever 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:56.186 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:56.186 link/ether ec:0d:9a:8b:2e:0d brd 
ff:ff:ff:ff:ff:ff 00:06:56.186 altname enp152s0f1np1 00:06:56.186 altname ens817f1np1 00:06:56.186 inet 192.168.100.9/24 scope global mlx_0_1 00:06:56.186 valid_lft forever preferred_lft forever 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:56.186 17:52:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:56.186 192.168.100.9' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:56.186 192.168.100.9' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:56.186 192.168.100.9' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3372023 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3372023 00:06:56.186 
17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3372023 ']' 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.186 17:52:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.186 [2024-11-20 17:52:14.712405] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:56.187 [2024-11-20 17:52:14.712455] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.448 [2024-11-20 17:52:14.808883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.448 [2024-11-20 17:52:14.856210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.448 [2024-11-20 17:52:14.856265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.448 [2024-11-20 17:52:14.856274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.448 [2024-11-20 17:52:14.856281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.448 [2024-11-20 17:52:14.856287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
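The -m 0x1E core mask handed to nvmf_tgt above is binary 11110, i.e. cores 1 through 4, which is why the app reports four available cores and exactly four reactors start below. An illustrative bash one-liner to decode such a mask (not part of the test itself):

    # Decode an SPDK hex core mask.
    mask=0x1E
    for i in {0..31}; do (( (mask >> i) & 1 )) && echo "core $i selected"; done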
00:06:56.448 [2024-11-20 17:52:14.858647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.448 [2024-11-20 17:52:14.858942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.448 [2024-11-20 17:52:14.858809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.448 [2024-11-20 17:52:14.858941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:57.020 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.020 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:57.020 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:57.020 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.020 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.281 [2024-11-20 17:52:15.596073] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x149b190/0x149f680) succeed. 00:06:57.281 [2024-11-20 17:52:15.610541] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x149c820/0x14e0d20) succeed. 
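The two create_ib_device notices above are the SPDK target picking up the two mlx5 ports discovered earlier (0000:98:00.0 and 0000:98:00.1). Outside SPDK the same devices can be inspected with the libibverbs utilities, assuming rdma-core is installed:

    ibv_devices                 # should list mlx5_0 and mlx5_1
    ls /sys/class/infiniband    # kernel view of the same devices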
00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.281 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.281 Malloc0 00:06:57.281 [2024-11-20 17:52:15.805687] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3372397 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3372397 /var/tmp/bdevperf.sock 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3372397 ']' 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:57.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
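The batched rpc_cmd at host_management.sh@30 above is what produced the Malloc0 bdev and the RDMA listener notice on 192.168.100.8:4420. The rpcs.txt contents are not echoed in the trace, but an equivalent unbatched setup through scripts/rpc.py in this tree would look roughly like the sketch below (the malloc size/block size and serial number are assumptions based on the defaults used elsewhere in this test suite):

    # target-side setup against the nvmf_tgt RPC socket (/var/tmp/spdk.sock)
    rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB backing bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0  # no -a/--allow-any-host: host list is enforced
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

Keeping the host whitelist explicit matters here: the test later calls nvmf_subsystem_remove_host/nvmf_subsystem_add_host to yank connectivity out from under a running initiator.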
00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:57.542 { 00:06:57.542 "params": { 00:06:57.542 "name": "Nvme$subsystem", 00:06:57.542 "trtype": "$TEST_TRANSPORT", 00:06:57.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:57.542 "adrfam": "ipv4", 00:06:57.542 "trsvcid": "$NVMF_PORT", 00:06:57.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:57.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:57.542 "hdgst": ${hdgst:-false}, 00:06:57.542 "ddgst": ${ddgst:-false} 00:06:57.542 }, 00:06:57.542 "method": "bdev_nvme_attach_controller" 00:06:57.542 } 00:06:57.542 EOF 00:06:57.542 )") 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:57.542 17:52:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:57.542 "params": { 00:06:57.542 "name": "Nvme0", 00:06:57.542 "trtype": "rdma", 00:06:57.542 "traddr": "192.168.100.8", 00:06:57.542 "adrfam": "ipv4", 00:06:57.542 "trsvcid": "4420", 00:06:57.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:57.542 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:57.542 "hdgst": false, 00:06:57.542 "ddgst": false 00:06:57.542 }, 00:06:57.542 "method": "bdev_nvme_attach_controller" 00:06:57.542 }' 00:06:57.542 [2024-11-20 17:52:15.915193] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:57.542 [2024-11-20 17:52:15.915257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372397 ] 00:06:57.542 [2024-11-20 17:52:15.998378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.542 [2024-11-20 17:52:16.034695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.804 Running I/O for 10 seconds... 
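The --json /dev/fd/63 argument in the bdevperf command line above is bash process substitution: gen_nvmf_target_json expands the heredoc template once per subsystem, and bdevperf reads the resulting bdev_nvme_attach_controller config from the anonymous fd. The same invocation written out directly (this is the form the shell later reports in its "Killed" message):

    # hand bdevperf a generated attach-controller config without a temp file
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10    # queue depth 64, 64 KiB I/Os, verify workload, 10 s

The expanded JSON (printed by the printf '%s\n' trace just above) is what actually attaches Nvme0 over rdma://192.168.100.8:4420.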
00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1263 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1263 -ge 100 ']' 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
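Before pulling the host entry (the nvmf_subsystem_remove_host call being traced here), the waitforio helper above confirmed that real traffic was flowing: it polls bdevperf's own RPC socket for per-bdev stats until num_read_ops crosses a threshold. Condensed from the trace, the loop is roughly the following sketch:

    # poll up to 10 times for >=100 completed reads on Nvme0n1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && { ret=0; break; }
        sleep 1    # not reached in the traced run: it broke out on the first pass (1263 reads)
    done

Note that bdev_get_iostat is served by bdevperf itself (-r /var/tmp/bdevperf.sock), not by the target, so the counters reflect the initiator's view of completed I/O.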
00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.377 17:52:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:59.523 1388.00 IOPS, 86.75 MiB/s [2024-11-20T16:52:18.059Z]
[2024-11-20 17:52:17.813144 .. 17:52:17.814286] nvme_qpair.c: 243/474: *NOTICE*: (condensed) ~60 near-identical print_command/print_completion pairs follow in the original log: every in-flight command on qid:1 (WRITE sqid:1 lba 49152-54656 and READ sqid:1 lba 46592-49024, len:128, SGL KEYED DATA BLOCK, keys 0x182900/0x182a00/0x182400) completed as ABORTED - SQ DELETION (00/08) cdw0:4ac80000 sqhd:7250 p:0 m:0 dnr:0 once the host entry was removed from cnode0.
[2024-11-20 17:52:17.817814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3372397
17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:59.525 17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:59.525 {
00:06:59.525 "params": {
00:06:59.525 "name": "Nvme$subsystem",
00:06:59.525 "trtype": "$TEST_TRANSPORT",
00:06:59.525 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:59.525 "adrfam": "ipv4",
00:06:59.525 "trsvcid": "$NVMF_PORT",
00:06:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:59.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:59.525 "hdgst": ${hdgst:-false},
00:06:59.525 "ddgst": ${ddgst:-false}
00:06:59.525 },
00:06:59.525 "method": "bdev_nvme_attach_controller"
00:06:59.525 }
00:06:59.525 EOF
00:06:59.525 )")
00:06:59.525 17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:59.525 17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:59.525 17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:59.525 17:52:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:59.525 "params": {
00:06:59.525 "name": "Nvme0",
00:06:59.525 "trtype": "rdma",
00:06:59.525 "traddr": "192.168.100.8",
00:06:59.525 "adrfam": "ipv4",
00:06:59.525 "trsvcid": "4420",
00:06:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:59.525 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:59.525 "hdgst": false,
00:06:59.525 "ddgst": false
00:06:59.525 },
00:06:59.525 "method": "bdev_nvme_attach_controller"
00:06:59.525 }'
[2024-11-20 17:52:17.870987] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization...
[2024-11-20 17:52:17.871040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372747 ]
[2024-11-20 17:52:17.949340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 17:52:17.985569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.786 Running I/O for 1 seconds...
00:07:00.730 2412.00 IOPS, 150.75 MiB/s
00:07:00.730 Latency(us)
00:07:00.730 [2024-11-20T16:52:19.266Z] Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average   min      max
00:07:00.730 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:00.730 Verification LBA range: start 0x0 length 0x400
00:07:00.730 Nvme0n1            :  1.01       2459.31  153.71  0.00    0.00  25448.42  1037.65  44564.48
00:07:00.730 [2024-11-20T16:52:19.266Z] ===================================================================================================================
00:07:00.730 [2024-11-20T16:52:19.266Z] Total              :             2459.31  153.71  0.00    0.00  25448.42  1037.65  44564.48
00:07:00.992 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3372397 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
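The 1-second re-run above is the actual pass/fail check: after the host NQN was removed and re-added, a fresh bdevperf attach must succeed and move data again. The MiB/s column is derived directly from IOPS at the configured 64 KiB I/O size, e.g. for the Total row:

    # 2459.31 IOPS * 65536 bytes per I/O / 2^20 bytes per MiB = 153.71 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 2459.31 * 65536 / 1048576 }'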
00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:00.992 rmmod nvme_rdma 00:07:00.992 rmmod nvme_fabrics 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3372023 ']' 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3372023 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3372023 ']' 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3372023 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3372023 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3372023' 00:07:00.992 killing process with pid 3372023 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3372023 00:07:00.992 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3372023 00:07:01.253 [2024-11-20 17:52:19.632578] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:01.253 00:07:01.253 real 0m13.494s 00:07:01.253 user 0m25.008s 00:07:01.253 sys 0m7.086s 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.253 ************************************ 00:07:01.253 END TEST nvmf_host_management 00:07:01.253 ************************************ 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.253 ************************************ 00:07:01.253 START TEST nvmf_lvol 00:07:01.253 ************************************ 00:07:01.253 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:01.516 * Looking for test storage... 00:07:01.516 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.516 --rc genhtml_branch_coverage=1 00:07:01.516 --rc genhtml_function_coverage=1 00:07:01.516 --rc genhtml_legend=1 00:07:01.516 --rc geninfo_all_blocks=1 00:07:01.516 --rc geninfo_unexecuted_blocks=1 00:07:01.516 00:07:01.516 ' 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.516 --rc genhtml_branch_coverage=1 00:07:01.516 --rc genhtml_function_coverage=1 00:07:01.516 --rc genhtml_legend=1 00:07:01.516 --rc geninfo_all_blocks=1 00:07:01.516 --rc geninfo_unexecuted_blocks=1 00:07:01.516 00:07:01.516 ' 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.516 --rc genhtml_branch_coverage=1 00:07:01.516 --rc genhtml_function_coverage=1 00:07:01.516 --rc genhtml_legend=1 00:07:01.516 --rc geninfo_all_blocks=1 00:07:01.516 --rc geninfo_unexecuted_blocks=1 00:07:01.516 00:07:01.516 ' 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.516 --rc genhtml_branch_coverage=1 00:07:01.516 --rc genhtml_function_coverage=1 00:07:01.516 --rc genhtml_legend=1 00:07:01.516 --rc geninfo_all_blocks=1 00:07:01.516 --rc geninfo_unexecuted_blocks=1 00:07:01.516 00:07:01.516 ' 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.516 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.517 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.517 17:52:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.663 17:52:27 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:07:09.663 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:07:09.663 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:09.663 17:52:27 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:07:09.663 Found net devices under 0000:98:00.0: mlx_0_0 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:07:09.663 Found net devices under 0000:98:00.1: mlx_0_1 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
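The device-discovery trace above maps each Mellanox PCI function to its kernel net device by globbing the function's net/ directory in sysfs, which is how the log arrives at mlx_0_0 for 0000:98:00.0 and mlx_0_1 for 0000:98:00.1. A minimal standalone sketch of that lookup, assuming the standard sysfs layout (the vendor ID 0x15b3, the glob pattern, and the message format are taken from the trace; the loop itself is a simplification of nvmf/common.sh, not its verbatim source):

    # Sketch: enumerate net devices sitting under Mellanox PCI functions.
    mellanox=0x15b3
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$mellanox" ]] || continue
        device=$(<"$pci/device")
        pci_net_devs=("$pci/net/"*)                  # interfaces bound to this function
        [[ -e ${pci_net_devs[0]} ]] || continue      # skip functions with no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")      # keep only the interface names
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]} ($mellanox - $device)"
    done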
00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:09.663 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:09.664 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:09.664 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:09.664 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:09.664 17:52:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:09.664 
17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:09.664 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:09.664 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:07:09.664 altname enp152s0f0np0 00:07:09.664 altname ens817f0np0 00:07:09.664 inet 192.168.100.8/24 scope global mlx_0_0 00:07:09.664 valid_lft forever preferred_lft forever 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:09.664 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:09.664 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:07:09.664 altname enp152s0f1np1 00:07:09.664 altname ens817f1np1 00:07:09.664 inet 192.168.100.9/24 scope global mlx_0_1 00:07:09.664 valid_lft forever preferred_lft forever 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:09.664 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:09.665 192.168.100.9' 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:09.665 192.168.100.9' 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:09.665 192.168.100.9' 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:09.665 
17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3377446 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3377446 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3377446 ']' 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.665 17:52:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:09.927 [2024-11-20 17:52:28.235524] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:07:09.927 [2024-11-20 17:52:28.235595] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.927 [2024-11-20 17:52:28.319116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.927 [2024-11-20 17:52:28.359588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.927 [2024-11-20 17:52:28.359626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.927 [2024-11-20 17:52:28.359635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.927 [2024-11-20 17:52:28.359642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.927 [2024-11-20 17:52:28.359652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
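The nvmfappstart step above reduces to launching nvmf_tgt in the background and blocking until its RPC socket (/var/tmp/spdk.sock) answers; only then is it safe to issue the transport RPC that follows. A condensed sketch under the same workspace path, using the spdk_get_version RPC as a liveness probe (the polling detail is an assumption here; the real waitforlisten helper retries more carefully):

    # Sketch: start the target and wait for its RPC socket before configuring it.
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    until "$spdk/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    # The trace below then creates the RDMA transport with these exact arguments:
    "$spdk/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192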
00:07:09.927 [2024-11-20 17:52:28.361119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.927 [2024-11-20 17:52:28.361241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.927 [2024-11-20 17:52:28.361243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.866 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.866 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:10.866 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.866 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.866 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.866 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.866 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:10.866 [2024-11-20 17:52:29.270316] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1946360/0x194a850) succeed. 00:07:10.866 [2024-11-20 17:52:29.283934] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1947950/0x198bef0) succeed. 00:07:11.126 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:11.126 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:11.126 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:11.386 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:11.386 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:11.647 17:52:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:11.908 17:52:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fd187398-0fdf-4d73-be16-60f3fb210d7c 00:07:11.908 17:52:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd187398-0fdf-4d73-be16-60f3fb210d7c lvol 20 00:07:11.908 17:52:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=09f0c699-0938-431f-9ccf-2940e6409d3a 00:07:11.908 17:52:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:12.169 17:52:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 09f0c699-0938-431f-9ccf-2940e6409d3a 00:07:12.429 17:52:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:12.429 [2024-11-20 17:52:30.870900] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:12.429 17:52:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:12.690 17:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3378148 00:07:12.690 17:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:12.690 17:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:13.632 17:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 09f0c699-0938-431f-9ccf-2940e6409d3a MY_SNAPSHOT 00:07:13.892 17:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8c452952-c4a9-4c30-968f-acaec09b449b 00:07:13.892 17:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 09f0c699-0938-431f-9ccf-2940e6409d3a 30 00:07:14.153 17:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8c452952-c4a9-4c30-968f-acaec09b449b MY_CLONE 00:07:14.153 17:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=12742aae-e1b3-4c77-aa8d-c8125562fe71 00:07:14.153 17:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 12742aae-e1b3-4c77-aa8d-c8125562fe71 00:07:14.428 17:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3378148 00:07:24.542 Initializing NVMe Controllers 00:07:24.542 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:24.542 Controller IO queue size 128, less than required. 00:07:24.542 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:24.542 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:24.542 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:24.542 Initialization complete. Launching workers. 
00:07:24.542 ========================================================
00:07:24.542                                               Latency(us)
00:07:24.542 Device Information                            : IOPS      MiB/s    Average        min        max
00:07:24.542 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 22603.30 88.29 5663.31 2350.54 35538.97
00:07:24.542 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 22846.80 89.25 5602.80 2844.58 37921.39
00:07:24.542 ========================================================
00:07:24.542 Total                                         : 45450.10 177.54 5632.90 2350.54 37921.39
00:07:24.542
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 09f0c699-0938-431f-9ccf-2940e6409d3a
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd187398-0fdf-4d73-be16-60f3fb210d7c
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:24.542 17:52:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:07:24.542 rmmod nvme_rdma
00:07:24.542 rmmod nvme_fabrics
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3377446 ']'
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3377446
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3377446 ']'
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3377446
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:24.543 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3377446
00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3377446' 00:07:24.803 killing process with pid 3377446 00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3377446 00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3377446 00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:24.803 00:07:24.803 real 0m23.578s 00:07:24.803 user 1m11.887s 00:07:24.803 sys 0m7.136s 00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.803 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:24.803 ************************************ 00:07:24.803 END TEST nvmf_lvol 00:07:24.803 ************************************ 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.066 ************************************ 00:07:25.066 START TEST nvmf_lvs_grow 00:07:25.066 ************************************ 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:25.066 * Looking for test storage... 
00:07:25.066 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.066 --rc genhtml_branch_coverage=1 00:07:25.066 --rc genhtml_function_coverage=1 00:07:25.066 --rc genhtml_legend=1 00:07:25.066 --rc geninfo_all_blocks=1 00:07:25.066 --rc geninfo_unexecuted_blocks=1 00:07:25.066 00:07:25.066 ' 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.066 --rc genhtml_branch_coverage=1 00:07:25.066 --rc genhtml_function_coverage=1 00:07:25.066 --rc genhtml_legend=1 00:07:25.066 --rc geninfo_all_blocks=1 00:07:25.066 --rc geninfo_unexecuted_blocks=1 00:07:25.066 00:07:25.066 ' 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.066 --rc genhtml_branch_coverage=1 00:07:25.066 --rc genhtml_function_coverage=1 00:07:25.066 --rc genhtml_legend=1 00:07:25.066 --rc geninfo_all_blocks=1 00:07:25.066 --rc geninfo_unexecuted_blocks=1 00:07:25.066 00:07:25.066 ' 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.066 --rc genhtml_branch_coverage=1 00:07:25.066 --rc genhtml_function_coverage=1 00:07:25.066 --rc genhtml_legend=1 00:07:25.066 --rc geninfo_all_blocks=1 00:07:25.066 --rc geninfo_unexecuted_blocks=1 00:07:25.066 00:07:25.066 ' 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
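The lt 1.15 2 walk-through above is a field-wise compare of dotted version strings, used here to decide which lcov option set applies. A self-contained reconstruction of the less-than case (the function body is inferred from the trace, not copied verbatim from scripts/common.sh; missing fields default to 0):

    # Sketch: succeed when version $1 sorts strictly before version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not strictly less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x: use the old option set'   # succeeds, as in the trace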
00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.066 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:25.067 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.328 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.328 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.328 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.328 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.329 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:25.329 17:52:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.473 17:52:51 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:07:33.473 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:07:33.473 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:33.473 17:52:51 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:33.473 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:07:33.474 Found net devices under 0000:98:00.0: mlx_0_0 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:07:33.474 Found net devices under 0000:98:00.1: mlx_0_1 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:33.474 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:33.474 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:07:33.474 altname enp152s0f0np0 00:07:33.474 altname ens817f0np0 00:07:33.474 inet 192.168.100.8/24 scope global mlx_0_0 00:07:33.474 valid_lft forever preferred_lft forever 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:33.474 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:33.474 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:07:33.474 altname enp152s0f1np1 00:07:33.474 altname ens817f1np1 00:07:33.474 inet 192.168.100.9/24 scope global mlx_0_1 00:07:33.474 valid_lft forever preferred_lft forever 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:33.474 17:52:51 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:33.474 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:33.475 192.168.100.9' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:33.475 192.168.100.9' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:33.475 192.168.100.9' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3384853 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3384853 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3384853 ']' 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.475 17:52:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.475 [2024-11-20 17:52:51.692661] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:07:33.475 [2024-11-20 17:52:51.692735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.475 [2024-11-20 17:52:51.775461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.475 [2024-11-20 17:52:51.815375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.475 [2024-11-20 17:52:51.815413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.475 [2024-11-20 17:52:51.815421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.475 [2024-11-20 17:52:51.815428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.475 [2024-11-20 17:52:51.815434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
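The address harvesting traced above reduces to a small shell pattern: enumerate the RDMA-capable netdevs, then pull each one's IPv4 address out of "ip -o -4 addr show". The sketch below is a condensed re-implementation of that flow, not the verbatim get_ip_address/allocate_nic_ips helpers from nvmf/common.sh; the interface names are the ones from this run.

    # Condensed form of the per-NIC address lookup traced above. The helper
    # body is a sketch; nvmf/common.sh adds error handling around empty IPs.
    get_ipv4() {
        local interface=$1
        # "ip -o" emits one record per line; field 4 holds ADDR/PREFIXLEN,
        # so cut strips the /24 suffix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ips=()
    for nic in mlx_0_0 mlx_0_1; do
        rdma_ips+=("$(get_ipv4 "$nic")")
    done
    NVMF_FIRST_TARGET_IP=${rdma_ips[0]}    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=${rdma_ips[1]}   # 192.168.100.9 in this run

The real script aborts if either lookup comes back empty, which is why the trace checks "[[ -z $ip ]]" right after each lookup.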
00:07:33.475 [2024-11-20 17:52:51.816046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.047 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.047 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:34.047 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.047 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.047 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.047 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.047 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:34.308 [2024-11-20 17:52:52.733649] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e8cc80/0x1e91170) succeed. 00:07:34.308 [2024-11-20 17:52:52.746753] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e8e130/0x1ed2810) succeed. 00:07:34.308 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:34.308 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.308 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.308 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.568 ************************************ 00:07:34.568 START TEST lvs_grow_clean 00:07:34.568 ************************************ 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.568 17:52:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:34.568 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:34.569 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:34.830 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:34.830 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:34.830 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:35.091 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:35.091 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:35.091 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3cabd8f-02dd-4306-b304-f693e46c69a4 lvol 150 00:07:35.091 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d 00:07:35.091 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.091 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:35.353 [2024-11-20 17:52:53.771214] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:35.353 [2024-11-20 17:52:53.771270] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:35.353 true 00:07:35.353 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:35.353 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:35.614 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:35.614 17:52:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.614 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d 00:07:35.875 17:52:54 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:36.135 [2024-11-20 17:52:54.445480] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3385360 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3385360 /var/tmp/bdevperf.sock 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3385360 ']' 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.135 17:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:36.395 [2024-11-20 17:52:54.688742] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
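Everything between target start and the first bdevperf launch is plain JSON-RPC against the target's UNIX socket. The sequence below condenses the calls traced above (clean variant) into one place; it assumes nvmf_tgt already answers on the default /var/tmp/spdk.sock, the backing-file path is illustrative, and the UUIDs returned by the lvstore/lvol calls differ run to run (e3cabd8f-... and 8f947fbc-... here).

    # Condensed sketch of the RPC setup traced above; ordering is compressed
    # relative to the test script, which interleaves data_clusters checks.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    truncate -s 200M /tmp/aio_backing                  # illustrative path
    $rpc bdev_aio_create /tmp/aio_backing aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol
    truncate -s 400M /tmp/aio_backing                  # grow the file on disk...
    $rpc bdev_aio_rescan aio_bdev                      # ...and tell SPDK it grew
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t rdma -a 192.168.100.8 -s 4420

Note that growing the backing file alone changes nothing inside SPDK; the bdev_aio_rescan is what raises the block count (51200 to 102400 in the notice above), and the lvstore itself is only grown later, mid-workload, via bdev_lvol_grow_lvstore, which is why total_data_clusters moves from 49 to 99 during the run.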
00:07:36.395 [2024-11-20 17:52:54.688810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385360 ] 00:07:36.395 [2024-11-20 17:52:54.783028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.395 [2024-11-20 17:52:54.818998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.968 17:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.968 17:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:36.968 17:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:37.229 Nvme0n1 00:07:37.229 17:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:37.490 [ 00:07:37.490 { 00:07:37.490 "name": "Nvme0n1", 00:07:37.490 "aliases": [ 00:07:37.490 "8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d" 00:07:37.490 ], 00:07:37.490 "product_name": "NVMe disk", 00:07:37.490 "block_size": 4096, 00:07:37.490 "num_blocks": 38912, 00:07:37.490 "uuid": "8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d", 00:07:37.490 "numa_id": 1, 00:07:37.490 "assigned_rate_limits": { 00:07:37.490 "rw_ios_per_sec": 0, 00:07:37.490 "rw_mbytes_per_sec": 0, 00:07:37.490 "r_mbytes_per_sec": 0, 00:07:37.490 "w_mbytes_per_sec": 0 00:07:37.490 }, 00:07:37.490 "claimed": false, 00:07:37.490 "zoned": false, 00:07:37.490 "supported_io_types": { 00:07:37.490 "read": true, 00:07:37.490 "write": true, 00:07:37.490 "unmap": true, 00:07:37.490 "flush": true, 00:07:37.490 "reset": true, 00:07:37.490 "nvme_admin": true, 00:07:37.490 "nvme_io": true, 00:07:37.490 "nvme_io_md": false, 00:07:37.490 "write_zeroes": true, 00:07:37.490 "zcopy": false, 00:07:37.490 "get_zone_info": false, 00:07:37.490 "zone_management": false, 00:07:37.490 "zone_append": false, 00:07:37.490 "compare": true, 00:07:37.490 "compare_and_write": true, 00:07:37.490 "abort": true, 00:07:37.490 "seek_hole": false, 00:07:37.490 "seek_data": false, 00:07:37.490 "copy": true, 00:07:37.490 "nvme_iov_md": false 00:07:37.490 }, 00:07:37.490 "memory_domains": [ 00:07:37.490 { 00:07:37.490 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:37.490 "dma_device_type": 0 00:07:37.490 } 00:07:37.490 ], 00:07:37.490 "driver_specific": { 00:07:37.490 "nvme": [ 00:07:37.490 { 00:07:37.490 "trid": { 00:07:37.490 "trtype": "RDMA", 00:07:37.490 "adrfam": "IPv4", 00:07:37.490 "traddr": "192.168.100.8", 00:07:37.490 "trsvcid": "4420", 00:07:37.490 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:37.490 }, 00:07:37.490 "ctrlr_data": { 00:07:37.490 "cntlid": 1, 00:07:37.490 "vendor_id": "0x8086", 00:07:37.490 "model_number": "SPDK bdev Controller", 00:07:37.490 "serial_number": "SPDK0", 00:07:37.490 "firmware_revision": "25.01", 00:07:37.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.490 "oacs": { 00:07:37.490 "security": 0, 00:07:37.490 "format": 0, 00:07:37.490 "firmware": 0, 00:07:37.490 "ns_manage": 0 00:07:37.490 }, 00:07:37.490 "multi_ctrlr": true, 
00:07:37.490 "ana_reporting": false 00:07:37.490 }, 00:07:37.490 "vs": { 00:07:37.490 "nvme_version": "1.3" 00:07:37.490 }, 00:07:37.490 "ns_data": { 00:07:37.490 "id": 1, 00:07:37.490 "can_share": true 00:07:37.490 } 00:07:37.490 } 00:07:37.490 ], 00:07:37.490 "mp_policy": "active_passive" 00:07:37.490 } 00:07:37.490 } 00:07:37.490 ] 00:07:37.490 17:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3385595 00:07:37.490 17:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:37.490 17:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:37.490 Running I/O for 10 seconds... 00:07:38.878 Latency(us) 00:07:38.878 [2024-11-20T16:52:57.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.878 Nvme0n1 : 1.00 25599.00 100.00 0.00 0.00 0.00 0.00 0.00 00:07:38.878 [2024-11-20T16:52:57.414Z] =================================================================================================================== 00:07:38.878 [2024-11-20T16:52:57.414Z] Total : 25599.00 100.00 0.00 0.00 0.00 0.00 0.00 00:07:38.878 00:07:39.451 17:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:39.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.713 Nvme0n1 : 2.00 25824.00 100.88 0.00 0.00 0.00 0.00 0.00 00:07:39.713 [2024-11-20T16:52:58.249Z] =================================================================================================================== 00:07:39.713 [2024-11-20T16:52:58.249Z] Total : 25824.00 100.88 0.00 0.00 0.00 0.00 0.00 00:07:39.713 00:07:39.713 true 00:07:39.713 17:52:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:39.713 17:52:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:39.974 17:52:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:39.974 17:52:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:39.974 17:52:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3385595 00:07:40.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.547 Nvme0n1 : 3.00 25909.33 101.21 0.00 0.00 0.00 0.00 0.00 00:07:40.547 [2024-11-20T16:52:59.083Z] =================================================================================================================== 00:07:40.547 [2024-11-20T16:52:59.083Z] Total : 25909.33 101.21 0.00 0.00 0.00 0.00 0.00 00:07:40.547 00:07:41.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.492 Nvme0n1 : 4.00 25975.50 101.47 0.00 0.00 0.00 0.00 0.00 00:07:41.492 [2024-11-20T16:53:00.028Z] 
=================================================================================================================== 00:07:41.492 [2024-11-20T16:53:00.028Z] Total : 25975.50 101.47 0.00 0.00 0.00 0.00 0.00 00:07:41.492 00:07:42.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.878 Nvme0n1 : 5.00 26015.40 101.62 0.00 0.00 0.00 0.00 0.00 00:07:42.878 [2024-11-20T16:53:01.414Z] =================================================================================================================== 00:07:42.878 [2024-11-20T16:53:01.414Z] Total : 26015.40 101.62 0.00 0.00 0.00 0.00 0.00 00:07:42.878 00:07:43.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.821 Nvme0n1 : 6.00 26047.50 101.75 0.00 0.00 0.00 0.00 0.00 00:07:43.821 [2024-11-20T16:53:02.357Z] =================================================================================================================== 00:07:43.821 [2024-11-20T16:53:02.357Z] Total : 26047.50 101.75 0.00 0.00 0.00 0.00 0.00 00:07:43.821 00:07:44.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.763 Nvme0n1 : 7.00 26074.86 101.85 0.00 0.00 0.00 0.00 0.00 00:07:44.763 [2024-11-20T16:53:03.299Z] =================================================================================================================== 00:07:44.763 [2024-11-20T16:53:03.299Z] Total : 26074.86 101.85 0.00 0.00 0.00 0.00 0.00 00:07:44.763 00:07:45.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.707 Nvme0n1 : 8.00 26095.50 101.94 0.00 0.00 0.00 0.00 0.00 00:07:45.707 [2024-11-20T16:53:04.243Z] =================================================================================================================== 00:07:45.707 [2024-11-20T16:53:04.243Z] Total : 26095.50 101.94 0.00 0.00 0.00 0.00 0.00 00:07:45.707 00:07:46.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.649 Nvme0n1 : 9.00 26112.22 102.00 0.00 0.00 0.00 0.00 0.00 00:07:46.649 [2024-11-20T16:53:05.185Z] =================================================================================================================== 00:07:46.649 [2024-11-20T16:53:05.185Z] Total : 26112.22 102.00 0.00 0.00 0.00 0.00 0.00 00:07:46.649 00:07:47.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.592 Nvme0n1 : 10.00 26127.60 102.06 0.00 0.00 0.00 0.00 0.00 00:07:47.592 [2024-11-20T16:53:06.128Z] =================================================================================================================== 00:07:47.592 [2024-11-20T16:53:06.128Z] Total : 26127.60 102.06 0.00 0.00 0.00 0.00 0.00 00:07:47.592 00:07:47.592 00:07:47.592 Latency(us) 00:07:47.592 [2024-11-20T16:53:06.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.592 Nvme0n1 : 10.00 26126.88 102.06 0.00 0.00 4895.28 3659.09 14199.47 00:07:47.592 [2024-11-20T16:53:06.128Z] =================================================================================================================== 00:07:47.592 [2024-11-20T16:53:06.128Z] Total : 26126.88 102.06 0.00 0.00 4895.28 3659.09 14199.47 00:07:47.592 { 00:07:47.592 "results": [ 00:07:47.592 { 00:07:47.592 "job": "Nvme0n1", 00:07:47.592 "core_mask": "0x2", 00:07:47.592 "workload": "randwrite", 00:07:47.592 "status": "finished", 00:07:47.592 "queue_depth": 128, 00:07:47.592 "io_size": 4096, 
00:07:47.592 "runtime": 10.004143, 00:07:47.592 "iops": 26126.875635424243, 00:07:47.592 "mibps": 102.05810795087595, 00:07:47.593 "io_failed": 0, 00:07:47.593 "io_timeout": 0, 00:07:47.593 "avg_latency_us": 4895.279579968143, 00:07:47.593 "min_latency_us": 3659.0933333333332, 00:07:47.593 "max_latency_us": 14199.466666666667 00:07:47.593 } 00:07:47.593 ], 00:07:47.593 "core_count": 1 00:07:47.593 } 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3385360 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3385360 ']' 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3385360 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3385360 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3385360' 00:07:47.593 killing process with pid 3385360 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3385360 00:07:47.593 Received shutdown signal, test time was about 10.000000 seconds 00:07:47.593 00:07:47.593 Latency(us) 00:07:47.593 [2024-11-20T16:53:06.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.593 [2024-11-20T16:53:06.129Z] =================================================================================================================== 00:07:47.593 [2024-11-20T16:53:06.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:47.593 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3385360 00:07:47.854 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:48.116 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:48.116 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:48.116 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:48.377 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:48.377 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:48.377 17:53:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.638 [2024-11-20 17:53:06.914803] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:48.638 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:48.638 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:48.639 17:53:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:48.639 request: 00:07:48.639 { 00:07:48.639 "uuid": "e3cabd8f-02dd-4306-b304-f693e46c69a4", 00:07:48.639 "method": "bdev_lvol_get_lvstores", 00:07:48.639 "req_id": 1 00:07:48.639 } 00:07:48.639 Got JSON-RPC error response 00:07:48.639 response: 00:07:48.639 { 00:07:48.639 "code": -19, 00:07:48.639 "message": "No such device" 00:07:48.639 } 00:07:48.639 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:48.639 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.639 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:48.639 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.639 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:48.899 aio_bdev 00:07:48.899 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d 00:07:48.899 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d 00:07:48.899 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:48.899 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:48.899 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:48.899 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:48.899 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.160 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d -t 2000 00:07:49.160 [ 00:07:49.160 { 00:07:49.160 "name": "8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d", 00:07:49.160 "aliases": [ 00:07:49.160 "lvs/lvol" 00:07:49.160 ], 00:07:49.160 "product_name": "Logical Volume", 00:07:49.160 "block_size": 4096, 00:07:49.160 "num_blocks": 38912, 00:07:49.160 "uuid": "8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d", 00:07:49.160 "assigned_rate_limits": { 00:07:49.160 "rw_ios_per_sec": 0, 00:07:49.160 "rw_mbytes_per_sec": 0, 00:07:49.160 "r_mbytes_per_sec": 0, 00:07:49.160 "w_mbytes_per_sec": 0 00:07:49.160 }, 00:07:49.160 "claimed": false, 00:07:49.160 "zoned": false, 00:07:49.160 "supported_io_types": { 00:07:49.160 "read": true, 00:07:49.160 "write": true, 00:07:49.160 "unmap": true, 00:07:49.160 "flush": false, 00:07:49.160 "reset": true, 00:07:49.160 "nvme_admin": false, 00:07:49.160 "nvme_io": false, 00:07:49.160 "nvme_io_md": false, 00:07:49.160 "write_zeroes": true, 00:07:49.160 "zcopy": false, 00:07:49.160 "get_zone_info": false, 00:07:49.160 "zone_management": false, 00:07:49.160 "zone_append": false, 00:07:49.160 "compare": false, 00:07:49.160 "compare_and_write": false, 00:07:49.160 "abort": false, 00:07:49.160 "seek_hole": true, 00:07:49.160 "seek_data": true, 00:07:49.160 "copy": false, 00:07:49.160 "nvme_iov_md": false 00:07:49.160 }, 00:07:49.160 "driver_specific": { 00:07:49.160 "lvol": { 00:07:49.160 "lvol_store_uuid": "e3cabd8f-02dd-4306-b304-f693e46c69a4", 00:07:49.160 "base_bdev": "aio_bdev", 00:07:49.160 "thin_provision": false, 00:07:49.160 "num_allocated_clusters": 38, 00:07:49.160 "snapshot": false, 00:07:49.160 "clone": false, 00:07:49.160 "esnap_clone": false 00:07:49.160 } 00:07:49.160 } 00:07:49.160 } 00:07:49.160 ] 00:07:49.160 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:49.160 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:49.160 17:53:07 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:49.421 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:49.421 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:49.421 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:49.682 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:49.682 17:53:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8f947fbc-c0f3-43fb-940f-bc9e5b79eb0d 00:07:49.682 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3cabd8f-02dd-4306-b304-f693e46c69a4 00:07:49.943 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.203 00:07:50.203 real 0m15.681s 00:07:50.203 user 0m15.679s 00:07:50.203 sys 0m0.981s 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:50.203 ************************************ 00:07:50.203 END TEST lvs_grow_clean 00:07:50.203 ************************************ 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.203 ************************************ 00:07:50.203 START TEST lvs_grow_dirty 00:07:50.203 ************************************ 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.203 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.465 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:50.465 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:50.465 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=592ffca4-2339-4321-990f-7045fac810f4 00:07:50.465 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:07:50.465 17:53:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:50.727 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:50.727 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:50.727 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 592ffca4-2339-4321-990f-7045fac810f4 lvol 150 00:07:50.987 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b7809c5d-7b09-462d-bd36-077abd909ace 00:07:50.987 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.987 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:50.987 [2024-11-20 17:53:09.458117] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:50.987 [2024-11-20 17:53:09.458168] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:50.987 true 00:07:50.987 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:07:50.987 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:51.247 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:51.247 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:51.508 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7809c5d-7b09-462d-bd36-077abd909ace 00:07:51.508 17:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:51.770 [2024-11-20 17:53:10.108314] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3388646 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3388646 /var/tmp/bdevperf.sock 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3388646 ']' 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.770 17:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.031 [2024-11-20 17:53:10.340603] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
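Each 10-second run above follows the same driver pattern: bdevperf starts idle (-z), the NVMe-oF controller is attached through bdevperf's own RPC socket, and perform_tests kicks off the workload. A sketch of that pattern, with flags mirroring the traced invocation and paths assuming an SPDK checkout:

    # Sketch of the bdevperf driving pattern used for both the clean and
    # dirty runs. -z makes bdevperf wait for RPC configuration before I/O.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!

    # Attach the RDMA subsystem inside bdevperf; this surfaces Nvme0n1.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0

    # Run the configured randwrite workload and wait for the summary.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    kill "$bdevperf_pid"

The -S 1 flag is what produces the per-second "Latency(us)" rows interleaved through the trace; the final table after ten seconds is the aggregate the test asserts on.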
00:07:52.031 [2024-11-20 17:53:10.340655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388646 ] 00:07:52.031 [2024-11-20 17:53:10.432662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.031 [2024-11-20 17:53:10.468682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.974 17:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.974 17:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:52.975 17:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:52.975 Nvme0n1 00:07:52.975 17:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:53.236 [ 00:07:53.236 { 00:07:53.236 "name": "Nvme0n1", 00:07:53.236 "aliases": [ 00:07:53.236 "b7809c5d-7b09-462d-bd36-077abd909ace" 00:07:53.236 ], 00:07:53.236 "product_name": "NVMe disk", 00:07:53.236 "block_size": 4096, 00:07:53.236 "num_blocks": 38912, 00:07:53.236 "uuid": "b7809c5d-7b09-462d-bd36-077abd909ace", 00:07:53.236 "numa_id": 1, 00:07:53.236 "assigned_rate_limits": { 00:07:53.236 "rw_ios_per_sec": 0, 00:07:53.236 "rw_mbytes_per_sec": 0, 00:07:53.236 "r_mbytes_per_sec": 0, 00:07:53.236 "w_mbytes_per_sec": 0 00:07:53.236 }, 00:07:53.236 "claimed": false, 00:07:53.236 "zoned": false, 00:07:53.236 "supported_io_types": { 00:07:53.236 "read": true, 00:07:53.236 "write": true, 00:07:53.236 "unmap": true, 00:07:53.236 "flush": true, 00:07:53.236 "reset": true, 00:07:53.236 "nvme_admin": true, 00:07:53.236 "nvme_io": true, 00:07:53.236 "nvme_io_md": false, 00:07:53.236 "write_zeroes": true, 00:07:53.236 "zcopy": false, 00:07:53.236 "get_zone_info": false, 00:07:53.236 "zone_management": false, 00:07:53.236 "zone_append": false, 00:07:53.236 "compare": true, 00:07:53.236 "compare_and_write": true, 00:07:53.236 "abort": true, 00:07:53.236 "seek_hole": false, 00:07:53.236 "seek_data": false, 00:07:53.236 "copy": true, 00:07:53.236 "nvme_iov_md": false 00:07:53.236 }, 00:07:53.236 "memory_domains": [ 00:07:53.236 { 00:07:53.236 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:53.236 "dma_device_type": 0 00:07:53.236 } 00:07:53.236 ], 00:07:53.236 "driver_specific": { 00:07:53.236 "nvme": [ 00:07:53.236 { 00:07:53.236 "trid": { 00:07:53.236 "trtype": "RDMA", 00:07:53.236 "adrfam": "IPv4", 00:07:53.236 "traddr": "192.168.100.8", 00:07:53.236 "trsvcid": "4420", 00:07:53.236 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:53.236 }, 00:07:53.236 "ctrlr_data": { 00:07:53.236 "cntlid": 1, 00:07:53.236 "vendor_id": "0x8086", 00:07:53.236 "model_number": "SPDK bdev Controller", 00:07:53.236 "serial_number": "SPDK0", 00:07:53.236 "firmware_revision": "25.01", 00:07:53.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.236 "oacs": { 00:07:53.236 "security": 0, 00:07:53.236 "format": 0, 00:07:53.236 "firmware": 0, 00:07:53.236 "ns_manage": 0 00:07:53.236 }, 00:07:53.236 "multi_ctrlr": true, 
00:07:53.236 "ana_reporting": false 00:07:53.236 }, 00:07:53.236 "vs": { 00:07:53.236 "nvme_version": "1.3" 00:07:53.236 }, 00:07:53.236 "ns_data": { 00:07:53.236 "id": 1, 00:07:53.236 "can_share": true 00:07:53.236 } 00:07:53.236 } 00:07:53.236 ], 00:07:53.236 "mp_policy": "active_passive" 00:07:53.236 } 00:07:53.236 } 00:07:53.236 ] 00:07:53.236 17:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3388787 00:07:53.236 17:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:53.236 17:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:53.236 Running I/O for 10 seconds... 00:07:54.226 Latency(us) 00:07:54.226 [2024-11-20T16:53:12.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.226 Nvme0n1 : 1.00 25474.00 99.51 0.00 0.00 0.00 0.00 0.00 00:07:54.226 [2024-11-20T16:53:12.762Z] =================================================================================================================== 00:07:54.226 [2024-11-20T16:53:12.762Z] Total : 25474.00 99.51 0.00 0.00 0.00 0.00 0.00 00:07:54.226 00:07:55.169 17:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 592ffca4-2339-4321-990f-7045fac810f4 00:07:55.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.169 Nvme0n1 : 2.00 25759.50 100.62 0.00 0.00 0.00 0.00 0.00 00:07:55.169 [2024-11-20T16:53:13.706Z] =================================================================================================================== 00:07:55.170 [2024-11-20T16:53:13.706Z] Total : 25759.50 100.62 0.00 0.00 0.00 0.00 0.00 00:07:55.170 00:07:55.430 true 00:07:55.430 17:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:55.430 17:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:07:55.430 17:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:55.430 17:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:55.430 17:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3388787 00:07:56.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.374 Nvme0n1 : 3.00 25867.33 101.04 0.00 0.00 0.00 0.00 0.00 00:07:56.374 [2024-11-20T16:53:14.910Z] =================================================================================================================== 00:07:56.374 [2024-11-20T16:53:14.910Z] Total : 25867.33 101.04 0.00 0.00 0.00 0.00 0.00 00:07:56.374 00:07:57.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.316 Nvme0n1 : 4.00 25944.00 101.34 0.00 0.00 0.00 0.00 0.00 00:07:57.316 [2024-11-20T16:53:15.852Z] 
=================================================================================================================== 00:07:57.316 [2024-11-20T16:53:15.852Z] Total : 25944.00 101.34 0.00 0.00 0.00 0.00 0.00 00:07:57.316 00:07:58.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.259 Nvme0n1 : 5.00 25996.60 101.55 0.00 0.00 0.00 0.00 0.00 00:07:58.259 [2024-11-20T16:53:16.795Z] =================================================================================================================== 00:07:58.259 [2024-11-20T16:53:16.795Z] Total : 25996.60 101.55 0.00 0.00 0.00 0.00 0.00 00:07:58.259 00:07:59.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.202 Nvme0n1 : 6.00 26031.83 101.69 0.00 0.00 0.00 0.00 0.00 00:07:59.202 [2024-11-20T16:53:17.738Z] =================================================================================================================== 00:07:59.202 [2024-11-20T16:53:17.738Z] Total : 26031.83 101.69 0.00 0.00 0.00 0.00 0.00 00:07:59.202 00:08:00.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.588 Nvme0n1 : 7.00 26057.43 101.79 0.00 0.00 0.00 0.00 0.00 00:08:00.588 [2024-11-20T16:53:19.124Z] =================================================================================================================== 00:08:00.588 [2024-11-20T16:53:19.124Z] Total : 26057.43 101.79 0.00 0.00 0.00 0.00 0.00 00:08:00.588 00:08:01.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.168 Nvme0n1 : 8.00 26083.88 101.89 0.00 0.00 0.00 0.00 0.00 00:08:01.168 [2024-11-20T16:53:19.704Z] =================================================================================================================== 00:08:01.168 [2024-11-20T16:53:19.704Z] Total : 26083.88 101.89 0.00 0.00 0.00 0.00 0.00 00:08:01.168 00:08:02.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.554 Nvme0n1 : 9.00 26094.11 101.93 0.00 0.00 0.00 0.00 0.00 00:08:02.554 [2024-11-20T16:53:21.090Z] =================================================================================================================== 00:08:02.554 [2024-11-20T16:53:21.090Z] Total : 26094.11 101.93 0.00 0.00 0.00 0.00 0.00 00:08:02.554 00:08:03.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.495 Nvme0n1 : 10.00 26108.60 101.99 0.00 0.00 0.00 0.00 0.00 00:08:03.495 [2024-11-20T16:53:22.031Z] =================================================================================================================== 00:08:03.495 [2024-11-20T16:53:22.031Z] Total : 26108.60 101.99 0.00 0.00 0.00 0.00 0.00 00:08:03.495 00:08:03.495 00:08:03.495 Latency(us) 00:08:03.495 [2024-11-20T16:53:22.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.495 Nvme0n1 : 10.00 26108.71 101.99 0.00 0.00 4899.18 3659.09 18786.99 00:08:03.495 [2024-11-20T16:53:22.032Z] =================================================================================================================== 00:08:03.496 [2024-11-20T16:53:22.032Z] Total : 26108.71 101.99 0.00 0.00 4899.18 3659.09 18786.99 00:08:03.496 { 00:08:03.496 "results": [ 00:08:03.496 { 00:08:03.496 "job": "Nvme0n1", 00:08:03.496 "core_mask": "0x2", 00:08:03.496 "workload": "randwrite", 00:08:03.496 "status": "finished", 00:08:03.496 "queue_depth": 128, 00:08:03.496 "io_size": 4096, 
00:08:03.496 "runtime": 10.004859, 00:08:03.496 "iops": 26108.71377597625, 00:08:03.496 "mibps": 101.98716318740723, 00:08:03.496 "io_failed": 0, 00:08:03.496 "io_timeout": 0, 00:08:03.496 "avg_latency_us": 4899.181923174102, 00:08:03.496 "min_latency_us": 3659.0933333333332, 00:08:03.496 "max_latency_us": 18786.986666666668 00:08:03.496 } 00:08:03.496 ], 00:08:03.496 "core_count": 1 00:08:03.496 } 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3388646 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3388646 ']' 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3388646 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3388646 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3388646' 00:08:03.496 killing process with pid 3388646 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3388646 00:08:03.496 Received shutdown signal, test time was about 10.000000 seconds 00:08:03.496 00:08:03.496 Latency(us) 00:08:03.496 [2024-11-20T16:53:22.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.496 [2024-11-20T16:53:22.032Z] =================================================================================================================== 00:08:03.496 [2024-11-20T16:53:22.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3388646 00:08:03.496 17:53:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:03.757 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.757 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:03.757 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:04.018 17:53:22 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3384853 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3384853 00:08:04.018 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3384853 Killed "${NVMF_APP[@]}" "$@" 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3391036 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3391036 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3391036 ']' 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.018 17:53:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.018 [2024-11-20 17:53:22.528682] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:04.018 [2024-11-20 17:53:22.528740] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.279 [2024-11-20 17:53:22.607866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.279 [2024-11-20 17:53:22.643369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.279 [2024-11-20 17:53:22.643402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.279 [2024-11-20 17:53:22.643410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.279 [2024-11-20 17:53:22.643416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:04.279 [2024-11-20 17:53:22.643422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.279 [2024-11-20 17:53:22.643982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.850 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.850 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:04.850 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.850 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.850 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.850 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.850 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.111 [2024-11-20 17:53:23.507585] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:05.111 [2024-11-20 17:53:23.507678] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:05.111 [2024-11-20 17:53:23.507709] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:05.111 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:05.111 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b7809c5d-7b09-462d-bd36-077abd909ace 00:08:05.111 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b7809c5d-7b09-462d-bd36-077abd909ace 00:08:05.111 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:05.111 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:05.111 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:05.111 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:05.112 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:05.373 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b7809c5d-7b09-462d-bd36-077abd909ace -t 2000 00:08:05.373 [ 00:08:05.373 { 00:08:05.373 "name": "b7809c5d-7b09-462d-bd36-077abd909ace", 00:08:05.373 "aliases": [ 00:08:05.373 "lvs/lvol" 00:08:05.373 ], 00:08:05.373 "product_name": "Logical Volume", 00:08:05.373 "block_size": 4096, 00:08:05.373 "num_blocks": 38912, 00:08:05.373 "uuid": "b7809c5d-7b09-462d-bd36-077abd909ace", 00:08:05.373 "assigned_rate_limits": { 00:08:05.373 "rw_ios_per_sec": 0, 00:08:05.373 "rw_mbytes_per_sec": 0, 
00:08:05.373 "r_mbytes_per_sec": 0, 00:08:05.373 "w_mbytes_per_sec": 0 00:08:05.373 }, 00:08:05.373 "claimed": false, 00:08:05.373 "zoned": false, 00:08:05.373 "supported_io_types": { 00:08:05.373 "read": true, 00:08:05.373 "write": true, 00:08:05.373 "unmap": true, 00:08:05.373 "flush": false, 00:08:05.373 "reset": true, 00:08:05.373 "nvme_admin": false, 00:08:05.373 "nvme_io": false, 00:08:05.373 "nvme_io_md": false, 00:08:05.373 "write_zeroes": true, 00:08:05.373 "zcopy": false, 00:08:05.373 "get_zone_info": false, 00:08:05.373 "zone_management": false, 00:08:05.373 "zone_append": false, 00:08:05.373 "compare": false, 00:08:05.373 "compare_and_write": false, 00:08:05.373 "abort": false, 00:08:05.373 "seek_hole": true, 00:08:05.373 "seek_data": true, 00:08:05.373 "copy": false, 00:08:05.373 "nvme_iov_md": false 00:08:05.373 }, 00:08:05.373 "driver_specific": { 00:08:05.373 "lvol": { 00:08:05.373 "lvol_store_uuid": "592ffca4-2339-4321-990f-7045fac810f4", 00:08:05.373 "base_bdev": "aio_bdev", 00:08:05.373 "thin_provision": false, 00:08:05.373 "num_allocated_clusters": 38, 00:08:05.373 "snapshot": false, 00:08:05.373 "clone": false, 00:08:05.373 "esnap_clone": false 00:08:05.373 } 00:08:05.373 } 00:08:05.373 } 00:08:05.373 ] 00:08:05.373 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:05.373 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:05.373 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:05.634 17:53:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:05.634 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:05.634 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:05.634 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.896 [2024-11-20 17:53:24.319680] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:05.896 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:06.157 request: 00:08:06.157 { 00:08:06.157 "uuid": "592ffca4-2339-4321-990f-7045fac810f4", 00:08:06.157 "method": "bdev_lvol_get_lvstores", 00:08:06.157 "req_id": 1 00:08:06.157 } 00:08:06.157 Got JSON-RPC error response 00:08:06.157 response: 00:08:06.157 { 00:08:06.157 "code": -19, 00:08:06.157 "message": "No such device" 00:08:06.157 } 00:08:06.157 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:06.157 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.157 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:06.157 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.157 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.157 aio_bdev 00:08:06.418 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b7809c5d-7b09-462d-bd36-077abd909ace 00:08:06.418 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b7809c5d-7b09-462d-bd36-077abd909ace 00:08:06.418 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.418 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:06.418 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.418 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.418 17:53:24 
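The es=1 bookkeeping above is the tail of the harness's expect-failure wrapper: bdev_lvol_get_lvstores is required to fail with -19 (No such device) once the backing aio bdev has been deleted. A simplified reconstruction of that helper's visible behavior (the real NOT() in common/autotest_common.sh also validates that its argument is executable, which is what the type -t / type -P probes above are doing):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # 128+N means killed by signal N: a real error
        (( es != 0 ))                    # succeed only when the wrapped command failed
    }

    NOT rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4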
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.418 17:53:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b7809c5d-7b09-462d-bd36-077abd909ace -t 2000 00:08:06.680 [ 00:08:06.680 { 00:08:06.680 "name": "b7809c5d-7b09-462d-bd36-077abd909ace", 00:08:06.680 "aliases": [ 00:08:06.680 "lvs/lvol" 00:08:06.680 ], 00:08:06.680 "product_name": "Logical Volume", 00:08:06.680 "block_size": 4096, 00:08:06.680 "num_blocks": 38912, 00:08:06.680 "uuid": "b7809c5d-7b09-462d-bd36-077abd909ace", 00:08:06.680 "assigned_rate_limits": { 00:08:06.680 "rw_ios_per_sec": 0, 00:08:06.680 "rw_mbytes_per_sec": 0, 00:08:06.680 "r_mbytes_per_sec": 0, 00:08:06.680 "w_mbytes_per_sec": 0 00:08:06.680 }, 00:08:06.680 "claimed": false, 00:08:06.680 "zoned": false, 00:08:06.680 "supported_io_types": { 00:08:06.680 "read": true, 00:08:06.680 "write": true, 00:08:06.680 "unmap": true, 00:08:06.680 "flush": false, 00:08:06.680 "reset": true, 00:08:06.680 "nvme_admin": false, 00:08:06.680 "nvme_io": false, 00:08:06.680 "nvme_io_md": false, 00:08:06.680 "write_zeroes": true, 00:08:06.680 "zcopy": false, 00:08:06.680 "get_zone_info": false, 00:08:06.680 "zone_management": false, 00:08:06.680 "zone_append": false, 00:08:06.680 "compare": false, 00:08:06.680 "compare_and_write": false, 00:08:06.680 "abort": false, 00:08:06.680 "seek_hole": true, 00:08:06.680 "seek_data": true, 00:08:06.680 "copy": false, 00:08:06.680 "nvme_iov_md": false 00:08:06.680 }, 00:08:06.680 "driver_specific": { 00:08:06.680 "lvol": { 00:08:06.680 "lvol_store_uuid": "592ffca4-2339-4321-990f-7045fac810f4", 00:08:06.680 "base_bdev": "aio_bdev", 00:08:06.680 "thin_provision": false, 00:08:06.680 "num_allocated_clusters": 38, 00:08:06.680 "snapshot": false, 00:08:06.680 "clone": false, 00:08:06.680 "esnap_clone": false 00:08:06.680 } 00:08:06.680 } 00:08:06.680 } 00:08:06.680 ] 00:08:06.680 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:06.680 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:06.680 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:06.942 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:06.942 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:06.942 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:06.942 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:06.942 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b7809c5d-7b09-462d-bd36-077abd909ace 00:08:07.204 17:53:25 
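The two jq probes just traced are the actual correctness check of the test: after the dirty restart and blobstore recovery, the lvstore must still show the grown geometry and the same allocation. In isolation, with the counts taken from this run:

    lvs=592ffca4-2339-4321-990f-7045fac810f4
    free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || { echo "lvstore state lost across dirty restart" >&2; exit 1; }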
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 592ffca4-2339-4321-990f-7045fac810f4 00:08:07.486 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.486 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.486 00:08:07.486 real 0m17.365s 00:08:07.486 user 0m45.991s 00:08:07.486 sys 0m2.403s 00:08:07.486 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.486 17:53:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.486 ************************************ 00:08:07.486 END TEST lvs_grow_dirty 00:08:07.486 ************************************ 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:07.769 nvmf_trace.0 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:07.769 rmmod nvme_rdma 00:08:07.769 rmmod nvme_fabrics 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:07.769 
17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:07.769 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3391036 ']' 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3391036 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3391036 ']' 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3391036 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3391036 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3391036' 00:08:07.770 killing process with pid 3391036 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3391036 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3391036 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:07.770 00:08:07.770 real 0m42.914s 00:08:07.770 user 1m8.188s 00:08:07.770 sys 0m9.897s 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.770 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 ************************************ 00:08:07.770 END TEST nvmf_lvs_grow 00:08:07.770 ************************************ 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.108 ************************************ 00:08:08.108 START TEST nvmf_bdev_io_wait 00:08:08.108 ************************************ 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:08.108 * Looking for test storage... 
00:08:08.108 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.108 --rc genhtml_branch_coverage=1 00:08:08.108 --rc genhtml_function_coverage=1 00:08:08.108 --rc genhtml_legend=1 00:08:08.108 --rc geninfo_all_blocks=1 00:08:08.108 --rc geninfo_unexecuted_blocks=1 00:08:08.108 00:08:08.108 ' 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.108 --rc genhtml_branch_coverage=1 00:08:08.108 --rc genhtml_function_coverage=1 00:08:08.108 --rc genhtml_legend=1 00:08:08.108 --rc geninfo_all_blocks=1 00:08:08.108 --rc geninfo_unexecuted_blocks=1 00:08:08.108 00:08:08.108 ' 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.108 --rc genhtml_branch_coverage=1 00:08:08.108 --rc genhtml_function_coverage=1 00:08:08.108 --rc genhtml_legend=1 00:08:08.108 --rc geninfo_all_blocks=1 00:08:08.108 --rc geninfo_unexecuted_blocks=1 00:08:08.108 00:08:08.108 ' 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.108 --rc genhtml_branch_coverage=1 00:08:08.108 --rc genhtml_function_coverage=1 00:08:08.108 --rc genhtml_legend=1 00:08:08.108 --rc geninfo_all_blocks=1 00:08:08.108 --rc geninfo_unexecuted_blocks=1 00:08:08.108 00:08:08.108 ' 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.108 17:53:26 
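The lt / cmp_versions walk traced above decides whether the installed lcov predates 2.x and therefore needs the legacy --rc option spelling. A simplified sketch of the numeric path of that comparison (the real scripts/common.sh helper also handles non-numeric version components, which this omits):

    lt() {   # lt A B: true if version A < version B
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not 'less than'
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov: use legacy flags'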
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:08.108 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.109 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.371 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.371 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.371 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.371 17:53:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.513 17:53:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:16.513 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:16.513 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:16.513 17:53:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.513 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:16.514 Found net devices under 0000:98:00.0: mlx_0_0 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:16.514 Found net devices under 0000:98:00.1: mlx_0_1 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
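The "Found net devices under …" lines just traced come from a plain sysfs walk: each candidate PCI function advertises its kernel net devices under /sys/bus/pci/devices/<addr>/net/. The same lookup in isolation, using the two mlx5 addresses this host reported:

    for pci in 0000:98:00.0 0000:98:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done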
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
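As traced, rdma_device_init is essentially a fixed modprobe sequence followed by NIC IP assignment. A condensed sketch of the module-loading half (module list copied from the trace; error handling omitted):

    # Load the kernel InfiniBand/RDMA stack required for NVMe-oF RDMA tests.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done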
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:16.514 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.514 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:16.514 altname enp152s0f0np0 00:08:16.514 altname ens817f0np0 00:08:16.514 inet 192.168.100.8/24 scope global mlx_0_0 00:08:16.514 valid_lft forever preferred_lft forever 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:16.514 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.514 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:16.514 altname enp152s0f1np1 00:08:16.514 altname ens817f1np1 00:08:16.514 inet 192.168.100.9/24 scope global mlx_0_1 00:08:16.514 valid_lft forever preferred_lft forever 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
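The per-interface address lookup traced above is a single iproute2 pipeline. Rewritten as a standalone helper (same commands as the trace; the function name mirrors the script's):

    # Print an interface's first IPv4 address, without the /prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig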
-t rxe_net_devs 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:16.514 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:16.515 192.168.100.9' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:16.515 192.168.100.9' 00:08:16.515 
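get_rdma_if_list, traced twice above, intersects the detected netdevs with the rxe-capable ones reported by rxe_cfg, using continue 2 to jump straight to the next outer iteration on the first match. The pattern in isolation (names taken from the trace):

    # Emit each net_dev that also appears in rxe_net_devs.
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2    # next net_dev; skip remaining rxe candidates
            fi
        done
    done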
17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:16.515 192.168.100.9' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3396143 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3396143 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3396143 ']' 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.515 17:53:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.515 [2024-11-20 17:53:34.983098] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
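RDMA_IP_LIST carries one address per line, and the first and second target IPs are peeled off with head/tail exactly as traced. The equivalent standalone commands (variable names as in the trace):

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9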
00:08:16.515 [2024-11-20 17:53:34.983155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.776 [2024-11-20 17:53:35.060770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.776 [2024-11-20 17:53:35.097587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.776 [2024-11-20 17:53:35.097620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.776 [2024-11-20 17:53:35.097631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.776 [2024-11-20 17:53:35.097638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.776 [2024-11-20 17:53:35.097643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.776 [2024-11-20 17:53:35.101875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.776 [2024-11-20 17:53:35.102060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.776 [2024-11-20 17:53:35.102200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.776 [2024-11-20 17:53:35.102200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:16.776 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.776 17:53:35 
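nvmfappstart, as traced above, launches build/bin/nvmf_tgt in the background and blocks in waitforlisten until /var/tmp/spdk.sock answers RPCs. A minimal stand-in for that wait (the polling loop is an illustration only; the harness helper has retries and timeouts beyond this):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Block until the app accepts requests on its UNIX-domain RPC socket.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done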
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 [2024-11-20 17:53:35.280695] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24a5f20/0x24aa410) succeed. 00:08:16.776 [2024-11-20 17:53:35.295049] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24a75b0/0x24ebab0) succeed. 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.037 Malloc0 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.037 [2024-11-20 17:53:35.471852] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3396205 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3396208 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
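The target bring-up above is a plain RPC sequence against the freshly started nvmf_tgt. Replayed by hand it would look roughly like this (rpc_cmd in the harness forwards to scripts/rpc.py; every argument below is copied from the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1             # tiny bdev_io pool/cache, the point of this test
    $rpc framework_start_init                   # leave the --wait-for-rpc holding state
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420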
00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.037 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.037 { 00:08:17.037 "params": { 00:08:17.038 "name": "Nvme$subsystem", 00:08:17.038 "trtype": "$TEST_TRANSPORT", 00:08:17.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.038 "adrfam": "ipv4", 00:08:17.038 "trsvcid": "$NVMF_PORT", 00:08:17.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.038 "hdgst": ${hdgst:-false}, 00:08:17.038 "ddgst": ${ddgst:-false} 00:08:17.038 }, 00:08:17.038 "method": "bdev_nvme_attach_controller" 00:08:17.038 } 00:08:17.038 EOF 00:08:17.038 )") 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3396210 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.038 { 00:08:17.038 "params": { 00:08:17.038 "name": "Nvme$subsystem", 00:08:17.038 "trtype": "$TEST_TRANSPORT", 00:08:17.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.038 "adrfam": "ipv4", 00:08:17.038 "trsvcid": "$NVMF_PORT", 00:08:17.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.038 "hdgst": ${hdgst:-false}, 00:08:17.038 "ddgst": ${ddgst:-false} 00:08:17.038 }, 00:08:17.038 "method": "bdev_nvme_attach_controller" 00:08:17.038 } 00:08:17.038 EOF 00:08:17.038 )") 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3396214 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.038 { 00:08:17.038 "params": { 00:08:17.038 "name": "Nvme$subsystem", 00:08:17.038 "trtype": "$TEST_TRANSPORT", 
00:08:17.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.038 "adrfam": "ipv4", 00:08:17.038 "trsvcid": "$NVMF_PORT", 00:08:17.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.038 "hdgst": ${hdgst:-false}, 00:08:17.038 "ddgst": ${ddgst:-false} 00:08:17.038 }, 00:08:17.038 "method": "bdev_nvme_attach_controller" 00:08:17.038 } 00:08:17.038 EOF 00:08:17.038 )") 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.038 { 00:08:17.038 "params": { 00:08:17.038 "name": "Nvme$subsystem", 00:08:17.038 "trtype": "$TEST_TRANSPORT", 00:08:17.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.038 "adrfam": "ipv4", 00:08:17.038 "trsvcid": "$NVMF_PORT", 00:08:17.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.038 "hdgst": ${hdgst:-false}, 00:08:17.038 "ddgst": ${ddgst:-false} 00:08:17.038 }, 00:08:17.038 "method": "bdev_nvme_attach_controller" 00:08:17.038 } 00:08:17.038 EOF 00:08:17.038 )") 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3396205 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.038 "params": { 00:08:17.038 "name": "Nvme1", 00:08:17.038 "trtype": "rdma", 00:08:17.038 "traddr": "192.168.100.8", 00:08:17.038 "adrfam": "ipv4", 00:08:17.038 "trsvcid": "4420", 00:08:17.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.038 "hdgst": false, 00:08:17.038 "ddgst": false 00:08:17.038 }, 00:08:17.038 "method": "bdev_nvme_attach_controller" 00:08:17.038 }' 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.038 "params": { 00:08:17.038 "name": "Nvme1", 00:08:17.038 "trtype": "rdma", 00:08:17.038 "traddr": "192.168.100.8", 00:08:17.038 "adrfam": "ipv4", 00:08:17.038 "trsvcid": "4420", 00:08:17.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.038 "hdgst": false, 00:08:17.038 "ddgst": false 00:08:17.038 }, 00:08:17.038 "method": "bdev_nvme_attach_controller" 00:08:17.038 }' 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.038 "params": { 00:08:17.038 "name": "Nvme1", 00:08:17.038 "trtype": "rdma", 00:08:17.038 "traddr": "192.168.100.8", 00:08:17.038 "adrfam": "ipv4", 00:08:17.038 "trsvcid": "4420", 00:08:17.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.038 "hdgst": false, 00:08:17.038 "ddgst": false 00:08:17.038 }, 00:08:17.038 "method": "bdev_nvme_attach_controller" 00:08:17.038 }' 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:17.038 17:53:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.038 "params": { 00:08:17.038 "name": "Nvme1", 00:08:17.038 "trtype": "rdma", 00:08:17.038 "traddr": "192.168.100.8", 00:08:17.038 "adrfam": "ipv4", 00:08:17.038 "trsvcid": "4420", 00:08:17.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.038 "hdgst": false, 00:08:17.038 "ddgst": false 00:08:17.038 }, 00:08:17.038 "method": "bdev_nvme_attach_controller" 00:08:17.038 }' 00:08:17.038 [2024-11-20 17:53:35.524621] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:17.038 [2024-11-20 17:53:35.524674] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:17.038 [2024-11-20 17:53:35.526887] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:17.038 [2024-11-20 17:53:35.526932] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:17.038 [2024-11-20 17:53:35.527991] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:17.038 [2024-11-20 17:53:35.527994] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
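The /dev/fd/63 argument seen in every bdevperf invocation is bash process substitution: the JSON printed by gen_nvmf_target_json above is handed to --json without a temporary file. Schematically, for the write instance (flags copied from the trace; gen_nvmf_target_json is the harness helper shown above):

    bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    # One instance per workload: its own core mask (-m), instance id (-i),
    # and generated initiator config delivered on an anonymous fd.
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!    # reaped later with 'wait'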
00:08:17.038 [2024-11-20 17:53:35.528038] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:17.038 [2024-11-20 17:53:35.528040] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:17.299 [2024-11-20 17:53:35.691591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.299 [2024-11-20 17:53:35.720406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:17.299 [2024-11-20 17:53:35.749682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.299 [2024-11-20 17:53:35.779142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:17.299 [2024-11-20 17:53:35.801701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.299 [2024-11-20 17:53:35.830482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:17.559 [2024-11-20 17:53:35.845290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.559 [2024-11-20 17:53:35.873531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:17.559 Running I/O for 1 seconds... 00:08:17.559 Running I/O for 1 seconds... 00:08:17.559 Running I/O for 1 seconds... 00:08:17.559 Running I/O for 1 seconds... 00:08:18.501 19728.00 IOPS, 77.06 MiB/s 00:08:18.501 Latency(us) 00:08:18.501 [2024-11-20T16:53:37.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.501 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:18.501 Nvme1n1 : 1.01 19811.01 77.39 0.00 0.00 6444.29 2949.12 15400.96 00:08:18.501 [2024-11-20T16:53:37.037Z] =================================================================================================================== 00:08:18.501 [2024-11-20T16:53:37.037Z] Total : 19811.01 77.39 0.00 0.00 6444.29 2949.12 15400.96 00:08:18.501 23713.00 IOPS, 92.63 MiB/s 00:08:18.501 Latency(us) 00:08:18.501 [2024-11-20T16:53:37.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.501 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:18.501 Nvme1n1 : 1.01 23770.49 92.85 0.00 0.00 5370.43 2798.93 13434.88 00:08:18.501 [2024-11-20T16:53:37.037Z] =================================================================================================================== 00:08:18.501 [2024-11-20T16:53:37.037Z] Total : 23770.49 92.85 0.00 0.00 5370.43 2798.93 13434.88 00:08:18.501 18895.00 IOPS, 73.81 MiB/s 00:08:18.501 Latency(us) 00:08:18.501 [2024-11-20T16:53:37.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.501 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:18.501 Nvme1n1 : 1.01 18972.88 74.11 0.00 0.00 6728.37 3112.96 16602.45 00:08:18.501 [2024-11-20T16:53:37.037Z] =================================================================================================================== 00:08:18.501 [2024-11-20T16:53:37.037Z] Total : 18972.88 74.11 0.00 0.00 6728.37 3112.96 16602.45 00:08:18.501 183760.00 IOPS, 717.81 MiB/s 00:08:18.501 Latency(us) 00:08:18.501 [2024-11-20T16:53:37.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:18.501 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:18.501 Nvme1n1 : 1.00 183392.96 716.38 0.00 0.00 693.74 302.08 2498.56 00:08:18.501 [2024-11-20T16:53:37.037Z] =================================================================================================================== 00:08:18.501 [2024-11-20T16:53:37.037Z] Total : 183392.96 716.38 0.00 0.00 693.74 302.08 2498.56 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3396208 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3396210 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3396214 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:18.762 rmmod nvme_rdma 00:08:18.762 rmmod nvme_fabrics 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3396143 ']' 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3396143 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3396143 ']' 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3396143 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3396143 00:08:18.762 17:53:37 
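Teardown mirrors setup: the script waits on each bdevperf PID, deletes the subsystem over RPC, and lets nvmftestfini unload nvme-rdma and kill the target via killprocess (whose trace continues below). In outline, with the PID variables from the trace:

    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"     # all four workloads finished
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the target config
    trap - SIGINT SIGTERM EXIT                                  # disarm the cleanup trap
    nvmftestfini                                                # rmmod nvme-rdma, killprocess $nvmfpid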
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3396143' 00:08:18.762 killing process with pid 3396143 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3396143 00:08:18.762 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3396143 00:08:19.022 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.022 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:19.022 00:08:19.022 real 0m11.097s 00:08:19.022 user 0m16.909s 00:08:19.022 sys 0m7.393s 00:08:19.023 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.023 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.023 ************************************ 00:08:19.023 END TEST nvmf_bdev_io_wait 00:08:19.023 ************************************ 00:08:19.023 17:53:37 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:19.023 17:53:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.023 17:53:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.023 17:53:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.023 ************************************ 00:08:19.023 START TEST nvmf_queue_depth 00:08:19.023 ************************************ 00:08:19.023 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:19.284 * Looking for test storage... 
00:08:19.284 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.284 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.285 --rc genhtml_branch_coverage=1 00:08:19.285 --rc genhtml_function_coverage=1 00:08:19.285 --rc genhtml_legend=1 00:08:19.285 --rc geninfo_all_blocks=1 00:08:19.285 --rc geninfo_unexecuted_blocks=1 00:08:19.285 00:08:19.285 ' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.285 --rc genhtml_branch_coverage=1 00:08:19.285 --rc genhtml_function_coverage=1 00:08:19.285 --rc genhtml_legend=1 00:08:19.285 --rc geninfo_all_blocks=1 00:08:19.285 --rc geninfo_unexecuted_blocks=1 00:08:19.285 00:08:19.285 ' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.285 --rc genhtml_branch_coverage=1 00:08:19.285 --rc genhtml_function_coverage=1 00:08:19.285 --rc genhtml_legend=1 00:08:19.285 --rc geninfo_all_blocks=1 00:08:19.285 --rc geninfo_unexecuted_blocks=1 00:08:19.285 00:08:19.285 ' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.285 --rc genhtml_branch_coverage=1 00:08:19.285 --rc genhtml_function_coverage=1 00:08:19.285 --rc genhtml_legend=1 00:08:19.285 --rc geninfo_all_blocks=1 00:08:19.285 --rc geninfo_unexecuted_blocks=1 00:08:19.285 00:08:19.285 ' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.285 17:53:37 
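The lt/cmp_versions calls above split each version string on dots and compare it component-wise, padding the shorter list with zeros. A compact re-implementation of the traced algorithm (simplified to dot-separated numeric components; the real scripts/common.sh also splits on '-' and ':'):

    version_lt() {    # usage: version_lt 1.15 2  -> exit 0, since 1.15 < 2
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }

    version_lt 1.15 2 && echo 'lcov predates the 2.x output format'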
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.285 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.285 17:53:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:27.427 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:27.428 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:27.428 Found 0000:98:00.1 (0x15b3 - 0x1015) 
00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:27.428 Found net devices under 0000:98:00.0: mlx_0_0 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:27.428 Found net devices under 0000:98:00.1: mlx_0_1 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:27.428 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:27.428 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:27.428 altname enp152s0f0np0 00:08:27.428 altname ens817f0np0 00:08:27.428 inet 192.168.100.8/24 scope global mlx_0_0 00:08:27.428 valid_lft forever preferred_lft forever 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:27.428 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:27.428 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:27.428 altname enp152s0f1np1 00:08:27.428 altname ens817f1np1 00:08:27.428 inet 192.168.100.9/24 scope global mlx_0_1 00:08:27.428 valid_lft forever preferred_lft forever 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:27.428 17:53:45 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:27.428 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:27.429 192.168.100.9' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:27.429 192.168.100.9' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:27.429 192.168.100.9' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3400898 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3400898 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3400898 ']' 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.429 17:53:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.429 [2024-11-20 17:53:45.944206] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
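
The address bookkeeping just traced is worth restating in plain form. A short sketch reusing the exact awk/cut pipeline and head/tail split from the trace, with the interface names as probed on this host (the extra `head -n 1` on the second assignment is a defensive touch added here for hosts with more than two ports; the traced helper uses `tail -n +2` alone):

  get_ip_address() {
      # field 4 of `ip -o -4 addr show <if>` is ADDR/PREFIX; cut strips the prefix length
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 here
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 here
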
00:08:27.429 [2024-11-20 17:53:45.944277] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.690 [2024-11-20 17:53:46.048930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.690 [2024-11-20 17:53:46.100014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.690 [2024-11-20 17:53:46.100063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.690 [2024-11-20 17:53:46.100071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.690 [2024-11-20 17:53:46.100079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.690 [2024-11-20 17:53:46.100085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.690 [2024-11-20 17:53:46.100884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.262 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.262 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:28.262 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.262 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.262 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 [2024-11-20 17:53:46.866132] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bc4f90/0x1bc9480) succeed. 00:08:28.523 [2024-11-20 17:53:46.879749] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bc6440/0x1c0ab20) succeed. 
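
For orientation: the rpc_cmd wrappers traced around this point (the transport created above, plus the malloc bdev, subsystem, namespace, and listener that follow) reduce to this plain rpc.py sequence against the target's /var/tmp/spdk.sock, every argument copied from the trace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
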
00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 Malloc0 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 [2024-11-20 17:53:46.985429] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3401231 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3401231 /var/tmp/bdevperf.sock 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3401231 ']' 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.523 17:53:46 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 [2024-11-20 17:53:47.040587] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:28.523 [2024-11-20 17:53:47.040649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401231 ] 00:08:28.785 [2024-11-20 17:53:47.123451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.785 [2024-11-20 17:53:47.164752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.355 17:53:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.355 17:53:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:29.355 17:53:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:29.355 17:53:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.355 17:53:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.616 NVMe0n1 00:08:29.616 17:53:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.616 17:53:47 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.616 Running I/O for 10 seconds... 
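
Condensing the initiator-side trace above into its three essential steps (paths and arguments as logged; the harness's waitforlisten polling between steps is elided here):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # 1. start bdevperf idle (-z) on its own RPC socket: qd 1024, 4 KiB verify I/O, 10 s
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # 2. attach the remote subsystem over RDMA as controller NVMe0 (namespace NVMe0n1)
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # 3. kick off the timed run
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
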
00:08:31.512 12288.00 IOPS, 48.00 MiB/s
[2024-11-20T16:53:51.434Z] 13723.00 IOPS, 53.61 MiB/s
[2024-11-20T16:53:52.378Z] 14332.33 IOPS, 55.99 MiB/s
[2024-11-20T16:53:53.321Z] 14592.00 IOPS, 57.00 MiB/s
[2024-11-20T16:53:54.265Z] 14745.60 IOPS, 57.60 MiB/s
[2024-11-20T16:53:55.209Z] 14848.00 IOPS, 58.00 MiB/s
[2024-11-20T16:53:56.159Z] 14921.14 IOPS, 58.29 MiB/s
[2024-11-20T16:53:57.102Z] 14987.75 IOPS, 58.55 MiB/s
[2024-11-20T16:53:58.486Z] 15043.33 IOPS, 58.76 MiB/s
[2024-11-20T16:53:58.486Z] 15095.20 IOPS, 58.97 MiB/s
00:08:39.950 Latency(us)
00:08:39.950 [2024-11-20T16:53:58.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:39.950 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:39.950 Verification LBA range: start 0x0 length 0x4000
00:08:39.950 NVMe0n1 : 10.03 15132.73 59.11 0.00 0.00 67454.80 5079.04 48496.64
00:08:39.950 [2024-11-20T16:53:58.486Z] ===================================================================================================================
00:08:39.950 [2024-11-20T16:53:58.486Z] Total : 15132.73 59.11 0.00 0.00 67454.80 5079.04 48496.64
00:08:39.950 {
00:08:39.950 "results": [
00:08:39.950 {
00:08:39.950 "job": "NVMe0n1",
00:08:39.950 "core_mask": "0x1",
00:08:39.950 "workload": "verify",
00:08:39.950 "status": "finished",
00:08:39.950 "verify_range": {
00:08:39.950 "start": 0,
00:08:39.950 "length": 16384
00:08:39.950 },
00:08:39.950 "queue_depth": 1024,
00:08:39.950 "io_size": 4096,
00:08:39.950 "runtime": 10.03137,
00:08:39.950 "iops": 15132.72863028679,
00:08:39.950 "mibps": 59.112221212057776,
00:08:39.950 "io_failed": 0,
00:08:39.950 "io_timeout": 0,
00:08:39.950 "avg_latency_us": 67454.80321295724,
00:08:39.950 "min_latency_us": 5079.04,
00:08:39.950 "max_latency_us": 48496.64
00:08:39.950 }
00:08:39.950 ],
00:08:39.950 "core_count": 1
00:08:39.950 }
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3401231
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3401231 ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3401231
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3401231
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3401231'
00:08:39.950 killing process with pid 3401231
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3401231
00:08:39.950 Received shutdown signal, test time was about 10.000000 seconds
00:08:39.950
00:08:39.950 Latency(us)
00:08:39.950 [2024-11-20T16:53:58.486Z] ===================================================================================================================
00:08:39.950 [2024-11-20T16:53:58.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3401231
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:39.950 rmmod nvme_rdma
00:08:39.950 rmmod nvme_fabrics
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3400898 ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3400898
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3400898 ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3400898
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3400898
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3400898'
00:08:39.950 killing process with pid 3400898
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3400898
00:08:39.950 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3400898
00:08:40.213 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:40.213 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:40.213
00:08:40.213 real 0m21.045s
00:08:40.213 user 0m26.579s
00:08:40.213 sys 0m6.802s
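
A quick sanity check on the summary above: bdevperf's MiB/s column is just IOPS times the 4096-byte I/O size, and multiplying the reported iops by the runtime recovers the total I/O count:

  awk 'BEGIN { printf "%.2f MiB/s\n", 15132.72863028679 * 4096 / (1024 * 1024) }'  # prints 59.11
  awk 'BEGIN { printf "%.0f I/Os\n",  15132.72863028679 * 10.03137 }'              # ~151802 total
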
17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.213 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.213 ************************************ 00:08:40.213 END TEST nvmf_queue_depth 00:08:40.213 ************************************ 00:08:40.213 17:53:58 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:40.213 17:53:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.213 17:53:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.213 17:53:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.213 ************************************ 00:08:40.213 START TEST nvmf_target_multipath 00:08:40.213 ************************************ 00:08:40.213 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:40.213 * Looking for test storage... 00:08:40.476 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.476 --rc genhtml_branch_coverage=1 00:08:40.476 --rc genhtml_function_coverage=1 00:08:40.476 --rc genhtml_legend=1 00:08:40.476 --rc geninfo_all_blocks=1 00:08:40.476 --rc geninfo_unexecuted_blocks=1 00:08:40.476 00:08:40.476 ' 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.476 --rc genhtml_branch_coverage=1 00:08:40.476 --rc genhtml_function_coverage=1 00:08:40.476 --rc genhtml_legend=1 00:08:40.476 --rc geninfo_all_blocks=1 00:08:40.476 --rc geninfo_unexecuted_blocks=1 00:08:40.476 00:08:40.476 ' 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.476 --rc genhtml_branch_coverage=1 00:08:40.476 --rc genhtml_function_coverage=1 00:08:40.476 --rc genhtml_legend=1 00:08:40.476 --rc geninfo_all_blocks=1 00:08:40.476 --rc geninfo_unexecuted_blocks=1 00:08:40.476 00:08:40.476 ' 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.476 --rc genhtml_branch_coverage=1 00:08:40.476 --rc genhtml_function_coverage=1 00:08:40.476 --rc genhtml_legend=1 00:08:40.476 --rc geninfo_all_blocks=1 00:08:40.476 --rc geninfo_unexecuted_blocks=1 00:08:40.476 00:08:40.476 ' 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.476 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.477 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.477 17:53:58 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:48.629 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:48.629 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:48.629 Found net devices under 0000:98:00.0: mlx_0_0 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:48.629 
17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:48.629 Found net devices under 0000:98:00.1: mlx_0_1 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
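The trace above shows load_ib_rdma_modules bringing up the kernel RDMA stack before allocate_nic_ips walks the detected ports. A minimal standalone sketch of that sequence, assuming the module names seen in the trace; the soft-RoCE fallback line is hypothetical and is only relevant on machines where rxe_cfg finds no hardware RDMA ports:

  # Load the kernel modules the NVMe-oF RDMA tests rely on.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
  # Hypothetical soft-RoCE fallback for machines without mlx5/e810 ports:
  # rdma link add rxe_eth0 type rxe netdev eth0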
00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.629 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:48.630 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.630 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:48.630 altname enp152s0f0np0 00:08:48.630 altname ens817f0np0 00:08:48.630 inet 192.168.100.8/24 scope global mlx_0_0 00:08:48.630 valid_lft forever preferred_lft forever 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:48.630 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.630 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:48.630 altname enp152s0f1np1 00:08:48.630 altname ens817f1np1 00:08:48.630 inet 192.168.100.9/24 scope global mlx_0_1 00:08:48.630 valid_lft forever preferred_lft forever 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:48.630 192.168.100.9' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:48.630 192.168.100.9' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:48.630 192.168.100.9' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:08:48.630 run this test only with TCP transport for now 00:08:48.630 17:54:06 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:48.630 rmmod nvme_rdma 00:08:48.630 rmmod nvme_fabrics 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:48.630 00:08:48.630 real 0m8.064s 00:08:48.630 user 0m2.130s 00:08:48.630 sys 0m5.997s 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
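nvmftestfini's teardown, traced twice above (once by the test and once by its exit trap), wraps module removal in set +e with a bounded retry so an nvme-rdma module that is still referenced cannot abort the run. A reduced sketch of the pattern; the sleep between attempts is an assumption, since the trace only shows the first attempt succeeding:

  set +e
  for i in {1..20}; do
      # modprobe -r unloads the module; it can fail while queues are still draining.
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  set -e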
00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:48.630 ************************************ 00:08:48.630 END TEST nvmf_target_multipath 00:08:48.630 ************************************ 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.630 ************************************ 00:08:48.630 START TEST nvmf_zcopy 00:08:48.630 ************************************ 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:08:48.630 * Looking for test storage... 00:08:48.630 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.630 17:54:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.630 --rc genhtml_branch_coverage=1 00:08:48.630 --rc genhtml_function_coverage=1 00:08:48.630 --rc genhtml_legend=1 00:08:48.630 --rc geninfo_all_blocks=1 00:08:48.630 --rc geninfo_unexecuted_blocks=1 00:08:48.630 00:08:48.630 ' 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.630 --rc genhtml_branch_coverage=1 00:08:48.630 --rc genhtml_function_coverage=1 00:08:48.630 --rc genhtml_legend=1 00:08:48.630 --rc geninfo_all_blocks=1 00:08:48.630 --rc geninfo_unexecuted_blocks=1 00:08:48.630 00:08:48.630 ' 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.630 --rc genhtml_branch_coverage=1 00:08:48.630 --rc genhtml_function_coverage=1 00:08:48.630 --rc genhtml_legend=1 00:08:48.630 --rc geninfo_all_blocks=1 00:08:48.630 --rc geninfo_unexecuted_blocks=1 00:08:48.630 00:08:48.630 ' 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.630 --rc genhtml_branch_coverage=1 00:08:48.630 --rc genhtml_function_coverage=1 00:08:48.630 --rc genhtml_legend=1 00:08:48.630 --rc geninfo_all_blocks=1 00:08:48.630 --rc geninfo_unexecuted_blocks=1 00:08:48.630 00:08:48.630 ' 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:48.630 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.631 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.631 17:54:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.843 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.843 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:56.843 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:56.843 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:56.843 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:56.843 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:56.844 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:56.844 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
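This block, re-run here for the zcopy test, classifies PCI functions into e810/x722/mlx families from a vendor:device cache and then keeps only the family selected by SPDK_TEST_NVMF_NICS (mlx5 in this job). A simplified sketch of the lookup; the cache contents below are hypothetical stand-ins for SPDK's pci_bus_cache, keyed the way the trace shows (mellanox is 0x15b3):

  mellanox=0x15b3
  declare -A pci_bus_cache=( ["$mellanox:0x1015"]="0000:98:00.0 0000:98:00.1" )  # hypothetical contents
  mlx=( ${pci_bus_cache["$mellanox:0x1015"]} )   # ConnectX-4 Lx ports, per the IDs logged above
  pci_devs=( "${mlx[@]}" )                       # mlx5 was requested, so other families are dropped
  for pci in "${pci_devs[@]}"; do
      echo "Found $pci ($mellanox - 0x1015)"
  done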
00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:56.844 Found net devices under 0000:98:00.0: mlx_0_0 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:56.844 Found net devices under 0000:98:00.1: mlx_0_1 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:56.844 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:56.845 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:56.845 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:56.845 altname enp152s0f0np0 00:08:56.845 altname ens817f0np0 00:08:56.845 inet 192.168.100.8/24 scope global mlx_0_0 
00:08:56.845 valid_lft forever preferred_lft forever 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:56.845 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:56.845 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:56.845 altname enp152s0f1np1 00:08:56.845 altname ens817f1np1 00:08:56.845 inet 192.168.100.9/24 scope global mlx_0_1 00:08:56.845 valid_lft forever preferred_lft forever 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:56.845 17:54:15 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:56.845 192.168.100.9' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:56.845 192.168.100.9' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:56.845 192.168.100.9' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3412527 00:08:56.845 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3412527 00:08:56.846 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:56.846 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3412527 ']' 00:08:56.846 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.846 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.846 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.846 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.846 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.108 [2024-11-20 17:54:15.383954] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:57.108 [2024-11-20 17:54:15.384031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.108 [2024-11-20 17:54:15.486233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.108 [2024-11-20 17:54:15.536443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.108 [2024-11-20 17:54:15.536497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.108 [2024-11-20 17:54:15.536506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.108 [2024-11-20 17:54:15.536513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.108 [2024-11-20 17:54:15.536519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
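nvmfappstart, whose launch is logged above (-i 0 -e 0xFFFF -m 0x2, one reactor pinned by the core mask), records the target's PID and then blocks until the app answers on its RPC socket. A minimal sketch of that start-and-wait handshake, assuming SPDK-repo-relative paths; waitforlisten does effectively this polling via rpc.py:

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target is ready for commands.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done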
00:08:57.108 [2024-11-20 17:54:15.537329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.681 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.681 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:57.681 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.681 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.681 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:08:57.943 Unsupported transport: rdma 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:57.943 nvmf_trace.0 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:57.943 rmmod nvme_rdma 00:08:57.943 rmmod nvme_fabrics 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
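Because the zcopy test exits early on RDMA, its EXIT trap still runs process_shm, which archives the target's trace shared-memory file before cleanup removes it. The core of that step, per the trace; $output_dir stands in for the job's spdk/../output directory:

  # Archive trace shm files (e.g. nvmf_trace.0) for offline analysis.
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
  for n in $shm_files; do
      tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
  done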
00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3412527 ']' 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3412527 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3412527 ']' 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3412527 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3412527 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:57.943 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:57.944 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3412527' 00:08:57.944 killing process with pid 3412527 00:08:57.944 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3412527 00:08:57.944 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3412527 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:58.206 00:08:58.206 real 0m9.766s 00:08:58.206 user 0m3.828s 00:08:58.206 sys 0m6.621s 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.206 ************************************ 00:08:58.206 END TEST nvmf_zcopy 00:08:58.206 ************************************ 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.206 ************************************ 00:08:58.206 START TEST nvmf_nmic 00:08:58.206 ************************************ 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:08:58.206 * Looking for test storage... 
00:08:58.206 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:58.206 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.470 --rc genhtml_branch_coverage=1 00:08:58.470 --rc genhtml_function_coverage=1 00:08:58.470 --rc genhtml_legend=1 00:08:58.470 --rc geninfo_all_blocks=1 00:08:58.470 --rc geninfo_unexecuted_blocks=1 00:08:58.470 00:08:58.470 ' 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.470 --rc genhtml_branch_coverage=1 00:08:58.470 --rc genhtml_function_coverage=1 00:08:58.470 --rc genhtml_legend=1 00:08:58.470 --rc geninfo_all_blocks=1 00:08:58.470 --rc geninfo_unexecuted_blocks=1 00:08:58.470 00:08:58.470 ' 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.470 --rc genhtml_branch_coverage=1 00:08:58.470 --rc genhtml_function_coverage=1 00:08:58.470 --rc genhtml_legend=1 00:08:58.470 --rc geninfo_all_blocks=1 00:08:58.470 --rc geninfo_unexecuted_blocks=1 00:08:58.470 00:08:58.470 ' 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.470 --rc genhtml_branch_coverage=1 00:08:58.470 --rc genhtml_function_coverage=1 00:08:58.470 --rc genhtml_legend=1 00:08:58.470 --rc geninfo_all_blocks=1 00:08:58.470 --rc geninfo_unexecuted_blocks=1 00:08:58.470 00:08:58.470 ' 00:08:58.470 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.471 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
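The nvmf/common.sh@17-19 trace above is where the host identity used by every later `nvme connect` in this run comes from: `nvme gen-hostnqn` emits a UUID-based NQN, the trailing UUID is reused as the host ID, and both are packed into the NVME_HOST flag array. A minimal sketch of that pattern (the `##*:` expansion is an illustrative assumption; the trace only shows that the two values match):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep everything after the last ':', i.e. the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # reused later as: nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n <subsystem NQN> -a <target IP> -s <port>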
00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.471 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.624 17:54:25 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:06.624 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:06.624 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.624 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:06.625 Found net devices under 0000:98:00.0: mlx_0_0 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:06.625 Found net devices under 0000:98:00.1: mlx_0_1 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:06.625 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:09:06.886 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:06.886 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:06.887 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.887 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:06.887 altname enp152s0f0np0 00:09:06.887 altname 
ens817f0np0 00:09:06.887 inet 192.168.100.8/24 scope global mlx_0_0 00:09:06.887 valid_lft forever preferred_lft forever 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:06.887 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.887 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:06.887 altname enp152s0f1np1 00:09:06.887 altname ens817f1np1 00:09:06.887 inet 192.168.100.9/24 scope global mlx_0_1 00:09:06.887 valid_lft forever preferred_lft forever 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
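The mlx_0_0/mlx_0_1 address probing traced above reduces to one pipeline per interface: field 4 of `ip -o -4 addr show` is the CIDR address, and `cut -d/ -f1` strips the prefix length. A self-contained sketch of that helper as it appears in the trace (function and variable names taken from nvmf/common.sh):

    get_ip_address() {
        local interface=$1
        # `ip -o` prints one line per address; $4 is e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9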
00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:06.887 192.168.100.9' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:06.887 192.168.100.9' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:06.887 192.168.100.9' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3417201 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3417201 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3417201 ']' 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.887 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.887 [2024-11-20 17:54:25.405949] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:09:06.887 [2024-11-20 17:54:25.406002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.148 [2024-11-20 17:54:25.485806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.148 [2024-11-20 17:54:25.522715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.148 [2024-11-20 17:54:25.522749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.148 [2024-11-20 17:54:25.522757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.148 [2024-11-20 17:54:25.522764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.148 [2024-11-20 17:54:25.522770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
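The app_setup_trace NOTICE block above names the two ways to get at the tracepoint groups enabled by `-e 0xFFFF`: attach `spdk_trace` to the running app, or take the shared-memory file directly. A short sketch assembled from the log's own hints (the tar line mirrors what the zcopy teardown did earlier via process_shm; the /tmp destination is an arbitrary example):

    spdk_trace -s nvmf -i 0                                      # snapshot events from the live nvmf app (shm id 0)
    cp /dev/shm/nvmf_trace.0 /tmp/                               # or copy the trace file for offline analysis
    tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0    # archive it the way the harness does at teardown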
00:09:07.148 [2024-11-20 17:54:25.524289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:07.148 [2024-11-20 17:54:25.524402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:07.148 [2024-11-20 17:54:25.524557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:07.148 [2024-11-20 17:54:25.524558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.723 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:07.984 [2024-11-20 17:54:26.286919] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x72bf30/0x730420) succeed.
00:09:07.984 [2024-11-20 17:54:26.301663] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x72d5c0/0x771ac0) succeed.
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:07.984 Malloc0
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:07.984 [2024-11-20 17:54:26.487752] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:09:07.984 test case1: single bdev can't be used in multiple subsystems
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.984 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:08.246 [2024-11-20 17:54:26.523473] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:09:08.246 [2024-11-20 17:54:26.523493] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:09:08.246 [2024-11-20 17:54:26.523500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.246 request:
00:09:08.246 {
00:09:08.246 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:08.246 "namespace": {
00:09:08.246 "bdev_name": "Malloc0",
00:09:08.246 "no_auto_visible": false
00:09:08.246 },
00:09:08.246 "method": "nvmf_subsystem_add_ns",
00:09:08.246 "req_id": 1
00:09:08.246 }
00:09:08.246 Got JSON-RPC error response
00:09:08.246 response:
00:09:08.246 {
00:09:08.246 "code": -32602,
00:09:08.246 "message": "Invalid parameters"
00:09:08.246 }
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:09:08.246 Adding namespace failed - expected result.
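Stripped of the xtrace noise, test case1 above is a five-call RPC sequence whose last call is expected to fail: cnode1's exclusive_write claim on Malloc0 blocks a second subsystem from opening the same bdev. A sketch of the equivalent direct invocation (assuming `rpc_cmd` wraps scripts/rpc.py against /var/tmp/spdk.sock, as the waitforlisten messages suggest):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails with -32602 Invalid parameters, as logged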
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:09:08.246 test case2: host connect to nvmf target in multiple paths
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:08.246 [2024-11-20 17:54:26.535549] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.246 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:09:09.633 17:54:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421
00:09:11.019 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:09:11.019 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:09:11.019 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:09:11.019 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:09:11.019 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:09:13.566 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:09:13.566 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:09:13.566 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:09:13.566 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:09:13.566 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:09:13.566 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:09:13.566 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:13.566 [global]
00:09:13.566 thread=1
00:09:13.566 invalidate=1
00:09:13.566 rw=write
00:09:13.566 time_based=1
00:09:13.566 runtime=1
00:09:13.566 ioengine=libaio
00:09:13.566 direct=1
00:09:13.566 bs=4096
00:09:13.566 iodepth=1
00:09:13.566 norandommap=0
00:09:13.566 numjobs=1
00:09:13.566
00:09:13.566 verify_dump=1
00:09:13.566 verify_backlog=512
00:09:13.566 verify_state_save=0
00:09:13.566 do_verify=1
00:09:13.566 verify=crc32c-intel
00:09:13.566 [job0]
00:09:13.566 filename=/dev/nvme0n1
00:09:13.566 Could not set queue depth (nvme0n1)
00:09:13.566 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:13.566 fio-3.35
00:09:13.566 Starting 1 thread
00:09:14.511
00:09:14.511 job0: (groupid=0, jobs=1): err= 0: pid=3418750: Wed Nov 20 17:54:33 2024
00:09:14.511 read: IOPS=7989, BW=31.2MiB/s (32.7MB/s)(31.2MiB/1001msec)
00:09:14.511 slat (nsec): min=5941, max=27880, avg=6375.84, stdev=745.44
00:09:14.511 clat (nsec): min=34601, max=88808, avg=52834.93, stdev=3469.62
00:09:14.511 lat (nsec): min=51385, max=94942, avg=59210.77, stdev=3497.08
00:09:14.511 clat percentiles (nsec):
00:09:14.511 | 1.00th=[46848], 5.00th=[47872], 10.00th=[48896], 20.00th=[49920],
00:09:14.511 | 30.00th=[50432], 40.00th=[51456], 50.00th=[52480], 60.00th=[53504],
00:09:14.511 | 70.00th=[54528], 80.00th=[55552], 90.00th=[57600], 95.00th=[59136],
00:09:14.511 | 99.00th=[62208], 99.50th=[63232], 99.90th=[67072], 99.95th=[74240],
00:09:14.511 | 99.99th=[88576]
00:09:14.511 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets
00:09:14.511 slat (nsec): min=7701, max=44052, avg=8745.73, stdev=1232.13
00:09:14.511 clat (usec): min=35, max=251, avg=51.36, stdev= 7.59
00:09:14.511 lat (usec): min=51, max=285, avg=60.11, stdev= 8.04
00:09:14.511 clat percentiles (usec):
00:09:14.511 | 1.00th=[ 45], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 48],
00:09:14.511 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52],
00:09:14.511 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 58],
00:09:14.511 | 99.00th=[ 62], 99.50th=[ 67], 99.90th=[ 196], 99.95th=[ 208],
00:09:14.511 | 99.99th=[ 253]
00:09:14.511 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1
00:09:14.511 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1
00:09:14.511 lat (usec) : 50=32.66%, 100=67.22%, 250=0.11%, 500=0.01%
00:09:14.511 cpu : usr=11.20%, sys=15.20%, ctx=16189, majf=0, minf=1
00:09:14.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:14.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:14.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:14.511 issued rwts: total=7997,8192,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:14.511 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:14.511
00:09:14.511 Run status group 0 (all jobs):
00:09:14.511 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=31.2MiB (32.8MB), run=1001-1001msec
00:09:14.511 WRITE: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec
00:09:14.511
00:09:14.511 Disk stats (read/write):
00:09:14.511 nvme0n1: ios=7218/7368, merge=0/0, ticks=341/325, in_queue=666, util=91.08%
00:09:14.511 17:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:17.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:17.230 rmmod nvme_rdma
00:09:17.230 rmmod nvme_fabrics
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3417201 ']'
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3417201
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3417201 ']'
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3417201
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3417201
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3417201'
00:09:17.230 killing process with pid 3417201
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3417201
00:09:17.230 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3417201
00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:09:17.491
00:09:17.491 real 0m19.246s
00:09:17.491 user 0m55.533s
00:09:17.491 sys 0m7.305s
00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:17.491 ************************************
END TEST nvmf_nmic 00:09:17.491 ************************************ 00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:17.491 ************************************ 00:09:17.491 START TEST nvmf_fio_target 00:09:17.491 ************************************ 00:09:17.491 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:17.753 * Looking for test storage... 00:09:17.753 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.753 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.754 --rc genhtml_branch_coverage=1 00:09:17.754 --rc genhtml_function_coverage=1 00:09:17.754 --rc genhtml_legend=1 00:09:17.754 --rc geninfo_all_blocks=1 00:09:17.754 --rc geninfo_unexecuted_blocks=1 00:09:17.754 00:09:17.754 ' 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.754 --rc genhtml_branch_coverage=1 00:09:17.754 --rc genhtml_function_coverage=1 00:09:17.754 --rc genhtml_legend=1 00:09:17.754 --rc geninfo_all_blocks=1 00:09:17.754 --rc geninfo_unexecuted_blocks=1 00:09:17.754 00:09:17.754 ' 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.754 --rc genhtml_branch_coverage=1 00:09:17.754 --rc genhtml_function_coverage=1 00:09:17.754 --rc genhtml_legend=1 00:09:17.754 --rc geninfo_all_blocks=1 00:09:17.754 --rc geninfo_unexecuted_blocks=1 00:09:17.754 00:09:17.754 ' 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.754 --rc genhtml_branch_coverage=1 00:09:17.754 --rc genhtml_function_coverage=1 00:09:17.754 --rc genhtml_legend=1 00:09:17.754 --rc geninfo_all_blocks=1 00:09:17.754 --rc geninfo_unexecuted_blocks=1 00:09:17.754 00:09:17.754 ' 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.754 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.755 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.755 
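Note: the "[: : integer expression expected" complaint above is emitted by nvmf/common.sh line 33, where the traced test '[' '' -eq 1 ']' runs a numeric comparison against a flag variable that expanded to empty; the harness tolerates the failure and continues. A defensive sketch of the same test, using a hypothetical name FLAG because the trace does not show which variable expanded empty:

    # numeric test guarded against unset/empty variables (sketch; FLAG is hypothetical)
    if [ "${FLAG:-0}" -eq 1 ]; then
        : # flag-dependent app arguments would be appended here
    fi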
17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:17.755 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
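Note: the e810/x722/mlx arrays being populated above classify NICs purely by PCI vendor:device ID (0x8086 for Intel, 0x15b3 for Mellanox); this run later matches 0x15b3:0x1015, which is typically a ConnectX-4 Lx function. A rough manual equivalent, as a sketch assuming lspci is available (the harness instead walks its own pci_bus_cache):

    # print the PCI functions of Mellanox 0x1015 devices by numeric ID
    lspci -Dnn | awk '/\[15b3:1015\]/ {print $1}'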
00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.907 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:25.907 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:25.908 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:25.908 Found net devices under 0000:98:00.0: mlx_0_0 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:25.908 Found net devices under 0000:98:00.1: mlx_0_1 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:25.908 17:54:43 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:25.908 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.908 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:25.908 altname enp152s0f0np0 00:09:25.908 altname ens817f0np0 00:09:25.908 inet 192.168.100.8/24 scope global mlx_0_0 00:09:25.908 valid_lft forever preferred_lft forever 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:25.908 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.908 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:25.908 altname enp152s0f1np1 00:09:25.908 altname ens817f1np1 00:09:25.908 inet 192.168.100.9/24 scope global mlx_0_1 00:09:25.908 valid_lft forever preferred_lft forever 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:25.908 17:54:43 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:25.908 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:25.909 192.168.100.9' 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:25.909 192.168.100.9' 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:25.909 192.168.100.9' 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:09:25.909 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3423788 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3423788 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3423788 ']' 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.909 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.909 [2024-11-20 17:54:44.102894] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:09:25.909 [2024-11-20 17:54:44.102949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.909 [2024-11-20 17:54:44.182582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.909 [2024-11-20 17:54:44.217854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:25.909 [2024-11-20 17:54:44.217889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.909 [2024-11-20 17:54:44.217897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.909 [2024-11-20 17:54:44.217903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.909 [2024-11-20 17:54:44.217909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.909 [2024-11-20 17:54:44.219445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.909 [2024-11-20 17:54:44.219562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.909 [2024-11-20 17:54:44.219716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.909 [2024-11-20 17:54:44.219717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.482 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.482 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:26.482 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.482 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.482 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.482 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.482 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:26.744 [2024-11-20 17:54:45.131336] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1306f30/0x130b420) succeed. 00:09:26.744 [2024-11-20 17:54:45.145735] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13085c0/0x134cac0) succeed. 
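Note: with the RDMA transport created above, the rpc.py calls traced below assemble the rest of the target that fio will exercise. Collected in one place as a sketch ($rpc_py is the alias fio.sh set earlier; NVME_HOSTNQN and NVME_HOSTID were produced by nvme gen-hostnqn during setup):

    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in 1 2 3 4 5 6 7; do $rpc_py bdev_malloc_create 64 512; done   # Malloc0..Malloc6
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma \
        -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

After the connect, waitforserial polls lsblk until four namespaces with serial SPDKISFASTANDAWESOME appear before the fio wrapper starts.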
00:09:27.005 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.005 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:27.005 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.267 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:27.267 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.534 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:27.534 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.794 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:27.794 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:27.795 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:28.055 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:28.055 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:28.316 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:28.316 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:28.316 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:28.316 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:28.577 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.840 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:28.840 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.102 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:29.102 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:29.102 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:29.363 [2024-11-20 17:54:47.716902] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:29.363 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:29.624 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:29.624 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:31.540 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:31.540 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:31.540 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.540 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:31.540 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:31.540 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:33.463 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:33.463 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:33.463 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.463 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:33.463 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.463 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:33.463 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:33.463 [global] 00:09:33.463 thread=1 00:09:33.463 invalidate=1 00:09:33.463 rw=write 00:09:33.463 time_based=1 00:09:33.463 runtime=1 00:09:33.463 ioengine=libaio 00:09:33.463 direct=1 00:09:33.463 bs=4096 00:09:33.463 iodepth=1 00:09:33.463 norandommap=0 00:09:33.463 numjobs=1 00:09:33.463 00:09:33.463 verify_dump=1 00:09:33.463 verify_backlog=512 00:09:33.463 verify_state_save=0 00:09:33.463 do_verify=1 00:09:33.463 verify=crc32c-intel 00:09:33.463 [job0] 00:09:33.463 filename=/dev/nvme0n1 00:09:33.463 [job1] 00:09:33.463 filename=/dev/nvme0n2 00:09:33.463 [job2] 00:09:33.463 filename=/dev/nvme0n3 00:09:33.463 [job3] 00:09:33.463 filename=/dev/nvme0n4 00:09:33.463 Could not set queue depth (nvme0n1) 00:09:33.463 Could not set queue depth (nvme0n2) 00:09:33.463 Could not set queue depth (nvme0n3) 00:09:33.463 Could not set queue depth (nvme0n4) 00:09:33.723 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.723 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.723 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.723 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.723 fio-3.35 00:09:33.723 Starting 4 threads 00:09:35.110 00:09:35.110 job0: (groupid=0, jobs=1): err= 0: pid=3425565: Wed Nov 20 17:54:53 2024 00:09:35.110 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(9.95MiB/1001msec) 00:09:35.110 slat (nsec): min=5475, max=48743, avg=16430.08, stdev=11644.14 00:09:35.110 clat (usec): min=44, max=471, avg=170.45, stdev=101.62 00:09:35.110 lat (usec): min=50, max=478, avg=186.88, stdev=108.59 00:09:35.110 clat percentiles (usec): 00:09:35.110 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 72], 00:09:35.110 | 30.00th=[ 81], 40.00th=[ 94], 50.00th=[ 125], 60.00th=[ 233], 00:09:35.110 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 347], 00:09:35.110 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 445], 99.95th=[ 457], 00:09:35.110 | 99.99th=[ 469] 00:09:35.110 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:35.110 slat (nsec): min=8165, max=55450, avg=20359.53, stdev=13169.39 00:09:35.110 clat (usec): min=43, max=461, avg=174.67, stdev=100.70 00:09:35.110 lat (usec): min=53, max=471, avg=195.03, stdev=108.42 00:09:35.110 clat percentiles (usec): 00:09:35.110 | 1.00th=[ 49], 5.00th=[ 53], 10.00th=[ 63], 20.00th=[ 73], 00:09:35.110 | 30.00th=[ 83], 40.00th=[ 98], 50.00th=[ 190], 60.00th=[ 235], 00:09:35.110 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 338], 00:09:35.110 | 99.00th=[ 400], 99.50th=[ 424], 99.90th=[ 445], 99.95th=[ 457], 00:09:35.110 | 99.99th=[ 461] 00:09:35.110 bw ( KiB/s): min=12288, max=12288, per=31.36%, avg=12288.00, stdev= 0.00, samples=1 00:09:35.110 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:35.110 lat (usec) : 50=1.88%, 100=40.81%, 250=27.28%, 500=30.04% 00:09:35.110 cpu : usr=7.10%, sys=12.10%, ctx=5108, majf=0, minf=1 00:09:35.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.110 issued rwts: total=2547,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.110 job1: (groupid=0, jobs=1): err= 0: pid=3425581: Wed Nov 20 17:54:53 2024 00:09:35.110 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:35.110 slat (nsec): min=5941, max=63763, avg=18930.72, stdev=11854.95 00:09:35.110 clat (usec): min=59, max=477, avg=213.59, stdev=91.92 00:09:35.110 lat (usec): min=66, max=484, avg=232.52, stdev=95.76 00:09:35.110 clat percentiles (usec): 00:09:35.110 | 1.00th=[ 64], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 82], 00:09:35.110 | 30.00th=[ 161], 40.00th=[ 231], 50.00th=[ 245], 60.00th=[ 255], 00:09:35.110 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 310], 95.00th=[ 359], 00:09:35.110 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 449], 99.95th=[ 457], 00:09:35.110 | 99.99th=[ 478] 00:09:35.110 write: IOPS=2137, BW=8551KiB/s (8757kB/s)(8560KiB/1001msec); 0 zone resets 00:09:35.110 slat (nsec): min=8147, max=56266, avg=22299.48, stdev=13105.09 00:09:35.110 
clat (usec): min=59, max=506, avg=211.70, stdev=94.42 00:09:35.110 lat (usec): min=67, max=540, avg=234.00, stdev=100.04 00:09:35.110 clat percentiles (usec): 00:09:35.110 | 1.00th=[ 64], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 80], 00:09:35.110 | 30.00th=[ 155], 40.00th=[ 217], 50.00th=[ 243], 60.00th=[ 255], 00:09:35.110 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 351], 00:09:35.110 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 465], 99.95th=[ 490], 00:09:35.110 | 99.99th=[ 506] 00:09:35.110 bw ( KiB/s): min= 8320, max= 8320, per=21.23%, avg=8320.00, stdev= 0.00, samples=1 00:09:35.110 iops : min= 2080, max= 2080, avg=2080.00, stdev= 0.00, samples=1 00:09:35.110 lat (usec) : 100=22.71%, 250=33.57%, 500=43.70%, 750=0.02% 00:09:35.110 cpu : usr=5.90%, sys=11.70%, ctx=4188, majf=0, minf=1 00:09:35.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.110 issued rwts: total=2048,2140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.110 job2: (groupid=0, jobs=1): err= 0: pid=3425602: Wed Nov 20 17:54:53 2024 00:09:35.110 read: IOPS=2302, BW=9211KiB/s (9432kB/s)(9220KiB/1001msec) 00:09:35.110 slat (nsec): min=6148, max=50309, avg=17164.75, stdev=12346.40 00:09:35.110 clat (usec): min=51, max=461, avg=177.23, stdev=99.34 00:09:35.110 lat (usec): min=57, max=491, avg=194.40, stdev=106.08 00:09:35.110 clat percentiles (usec): 00:09:35.110 | 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 78], 00:09:35.110 | 30.00th=[ 88], 40.00th=[ 99], 50.00th=[ 186], 60.00th=[ 235], 00:09:35.110 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 351], 00:09:35.110 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 445], 99.95th=[ 449], 00:09:35.110 | 99.99th=[ 461] 00:09:35.110 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:35.110 slat (nsec): min=8311, max=56336, avg=21202.64, stdev=13646.92 00:09:35.110 clat (usec): min=50, max=475, avg=185.04, stdev=99.30 00:09:35.110 lat (usec): min=59, max=487, avg=206.25, stdev=106.81 00:09:35.110 clat percentiles (usec): 00:09:35.110 | 1.00th=[ 55], 5.00th=[ 62], 10.00th=[ 70], 20.00th=[ 78], 00:09:35.110 | 30.00th=[ 92], 40.00th=[ 110], 50.00th=[ 206], 60.00th=[ 239], 00:09:35.110 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 351], 00:09:35.110 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 457], 99.95th=[ 465], 00:09:35.110 | 99.99th=[ 478] 00:09:35.110 bw ( KiB/s): min=10832, max=10832, per=27.64%, avg=10832.00, stdev= 0.00, samples=1 00:09:35.110 iops : min= 2708, max= 2708, avg=2708.00, stdev= 0.00, samples=1 00:09:35.110 lat (usec) : 100=38.25%, 250=30.87%, 500=30.87% 00:09:35.110 cpu : usr=6.30%, sys=12.60%, ctx=4866, majf=0, minf=1 00:09:35.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.110 issued rwts: total=2305,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.110 job3: (groupid=0, jobs=1): err= 0: pid=3425609: Wed Nov 20 17:54:53 2024 00:09:35.110 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:35.110 slat (nsec): min=6087, max=50215, 
avg=18488.86, stdev=12212.22 00:09:35.110 clat (usec): min=49, max=470, avg=201.25, stdev=99.18 00:09:35.110 lat (usec): min=55, max=482, avg=219.74, stdev=104.69 00:09:35.110 clat percentiles (usec): 00:09:35.110 | 1.00th=[ 57], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 87], 00:09:35.111 | 30.00th=[ 100], 40.00th=[ 196], 50.00th=[ 237], 60.00th=[ 249], 00:09:35.111 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 318], 95.00th=[ 367], 00:09:35.111 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 453], 99.95th=[ 465], 00:09:35.111 | 99.99th=[ 469] 00:09:35.111 write: IOPS=2543, BW=9.93MiB/s (10.4MB/s)(9.95MiB/1001msec); 0 zone resets 00:09:35.111 slat (nsec): min=8344, max=56803, avg=21777.58, stdev=13599.95 00:09:35.111 clat (usec): min=38, max=474, avg=185.04, stdev=99.28 00:09:35.111 lat (usec): min=60, max=488, avg=206.81, stdev=106.98 00:09:35.111 clat percentiles (usec): 00:09:35.111 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 77], 00:09:35.111 | 30.00th=[ 91], 40.00th=[ 110], 50.00th=[ 212], 60.00th=[ 241], 00:09:35.111 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 347], 00:09:35.111 | 99.00th=[ 396], 99.50th=[ 420], 99.90th=[ 457], 99.95th=[ 469], 00:09:35.111 | 99.99th=[ 474] 00:09:35.111 bw ( KiB/s): min= 8904, max= 8904, per=22.72%, avg=8904.00, stdev= 0.00, samples=1 00:09:35.111 iops : min= 2226, max= 2226, avg=2226.00, stdev= 0.00, samples=1 00:09:35.111 lat (usec) : 50=0.04%, 100=33.15%, 250=31.48%, 500=35.33% 00:09:35.111 cpu : usr=5.40%, sys=13.30%, ctx=4595, majf=0, minf=1 00:09:35.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.111 issued rwts: total=2048,2546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.111 00:09:35.111 Run status group 0 (all jobs): 00:09:35.111 READ: bw=34.9MiB/s (36.6MB/s), 8184KiB/s-9.94MiB/s (8380kB/s-10.4MB/s), io=35.0MiB (36.7MB), run=1001-1001msec 00:09:35.111 WRITE: bw=38.3MiB/s (40.1MB/s), 8551KiB/s-9.99MiB/s (8757kB/s-10.5MB/s), io=38.3MiB (40.2MB), run=1001-1001msec 00:09:35.111 00:09:35.111 Disk stats (read/write): 00:09:35.111 nvme0n1: ios=2098/2208, merge=0/0, ticks=253/235, in_queue=488, util=85.97% 00:09:35.111 nvme0n2: ios=1536/2018, merge=0/0, ticks=205/295, in_queue=500, util=86.16% 00:09:35.111 nvme0n3: ios=2048/2108, merge=0/0, ticks=234/251, in_queue=485, util=88.69% 00:09:35.111 nvme0n4: ios=1733/2048, merge=0/0, ticks=244/246, in_queue=490, util=89.63% 00:09:35.111 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:35.111 [global] 00:09:35.111 thread=1 00:09:35.111 invalidate=1 00:09:35.111 rw=randwrite 00:09:35.111 time_based=1 00:09:35.111 runtime=1 00:09:35.111 ioengine=libaio 00:09:35.111 direct=1 00:09:35.111 bs=4096 00:09:35.111 iodepth=1 00:09:35.111 norandommap=0 00:09:35.111 numjobs=1 00:09:35.111 00:09:35.111 verify_dump=1 00:09:35.111 verify_backlog=512 00:09:35.111 verify_state_save=0 00:09:35.111 do_verify=1 00:09:35.111 verify=crc32c-intel 00:09:35.111 [job0] 00:09:35.111 filename=/dev/nvme0n1 00:09:35.111 [job1] 00:09:35.111 filename=/dev/nvme0n2 00:09:35.111 [job2] 00:09:35.111 filename=/dev/nvme0n3 00:09:35.111 [job3] 00:09:35.111 filename=/dev/nvme0n4 00:09:35.111 Could not set queue 
depth (nvme0n1) 00:09:35.111 Could not set queue depth (nvme0n2) 00:09:35.111 Could not set queue depth (nvme0n3) 00:09:35.111 Could not set queue depth (nvme0n4) 00:09:35.371 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.371 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.371 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.371 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.371 fio-3.35 00:09:35.371 Starting 4 threads 00:09:36.756 00:09:36.756 job0: (groupid=0, jobs=1): err= 0: pid=3426068: Wed Nov 20 17:54:54 2024 00:09:36.756 read: IOPS=1769, BW=7077KiB/s (7247kB/s)(7084KiB/1001msec) 00:09:36.756 slat (nsec): min=5989, max=49889, avg=21573.98, stdev=11059.03 00:09:36.756 clat (usec): min=48, max=455, avg=222.83, stdev=82.85 00:09:36.756 lat (usec): min=55, max=478, avg=244.40, stdev=85.14 00:09:36.756 clat percentiles (usec): 00:09:36.756 | 1.00th=[ 60], 5.00th=[ 75], 10.00th=[ 89], 20.00th=[ 126], 00:09:36.756 | 30.00th=[ 200], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 249], 00:09:36.756 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 322], 95.00th=[ 355], 00:09:36.756 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 449], 99.95th=[ 457], 00:09:36.756 | 99.99th=[ 457] 00:09:36.756 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:36.756 slat (nsec): min=7940, max=72253, avg=26071.94, stdev=11624.82 00:09:36.756 clat (usec): min=45, max=479, avg=238.84, stdev=75.41 00:09:36.756 lat (usec): min=54, max=488, avg=264.91, stdev=76.79 00:09:36.756 clat percentiles (usec): 00:09:36.756 | 1.00th=[ 63], 5.00th=[ 76], 10.00th=[ 110], 20.00th=[ 196], 00:09:36.756 | 30.00th=[ 223], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 258], 00:09:36.756 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 359], 00:09:36.756 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 445], 99.95th=[ 461], 00:09:36.756 | 99.99th=[ 482] 00:09:36.756 bw ( KiB/s): min= 8175, max= 8175, per=15.53%, avg=8175.00, stdev= 0.00, samples=1 00:09:36.756 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:36.756 lat (usec) : 50=0.08%, 100=10.92%, 250=44.99%, 500=44.02% 00:09:36.756 cpu : usr=5.80%, sys=13.00%, ctx=3820, majf=0, minf=1 00:09:36.756 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.756 issued rwts: total=1771,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.756 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.756 job1: (groupid=0, jobs=1): err= 0: pid=3426080: Wed Nov 20 17:54:54 2024 00:09:36.756 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:09:36.756 slat (nsec): min=6005, max=48347, avg=8065.97, stdev=5984.65 00:09:36.756 clat (usec): min=45, max=464, avg=73.74, stdev=60.94 00:09:36.756 lat (usec): min=52, max=470, avg=81.81, stdev=65.65 00:09:36.756 clat percentiles (usec): 00:09:36.756 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52], 00:09:36.756 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:09:36.756 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 77], 95.00th=[ 253], 00:09:36.756 | 99.00th=[ 326], 99.50th=[ 367], 99.90th=[ 408], 99.95th=[ 
416], 00:09:36.756 | 99.99th=[ 465] 00:09:36.756 write: IOPS=6000, BW=23.4MiB/s (24.6MB/s)(23.5MiB/1001msec); 0 zone resets 00:09:36.756 slat (nsec): min=7921, max=66987, avg=10648.76, stdev=7201.00 00:09:36.756 clat (usec): min=35, max=479, avg=74.21, stdev=64.29 00:09:36.756 lat (usec): min=51, max=513, avg=84.86, stdev=70.12 00:09:36.756 clat percentiles (usec): 00:09:36.756 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:09:36.756 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 54], 60.00th=[ 55], 00:09:36.756 | 70.00th=[ 57], 80.00th=[ 60], 90.00th=[ 91], 95.00th=[ 255], 00:09:36.756 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 420], 99.95th=[ 445], 00:09:36.757 | 99.99th=[ 478] 00:09:36.757 bw ( KiB/s): min=32702, max=32702, per=62.12%, avg=32702.00, stdev= 0.00, samples=1 00:09:36.757 iops : min= 8175, max= 8175, avg=8175.00, stdev= 0.00, samples=1 00:09:36.757 lat (usec) : 50=14.34%, 100=76.55%, 250=3.52%, 500=5.59% 00:09:36.757 cpu : usr=7.80%, sys=15.80%, ctx=11638, majf=0, minf=2 00:09:36.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.757 issued rwts: total=5632,6006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.757 job2: (groupid=0, jobs=1): err= 0: pid=3426089: Wed Nov 20 17:54:54 2024 00:09:36.757 read: IOPS=2811, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:09:36.757 slat (nsec): min=5771, max=63113, avg=12027.95, stdev=9143.39 00:09:36.757 clat (usec): min=37, max=460, avg=161.37, stdev=93.14 00:09:36.757 lat (usec): min=59, max=482, avg=173.40, stdev=97.05 00:09:36.757 clat percentiles (usec): 00:09:36.757 | 1.00th=[ 56], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 70], 00:09:36.757 | 30.00th=[ 79], 40.00th=[ 90], 50.00th=[ 112], 60.00th=[ 215], 00:09:36.757 | 70.00th=[ 241], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 302], 00:09:36.757 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 433], 99.95th=[ 453], 00:09:36.757 | 99.99th=[ 461] 00:09:36.757 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:36.757 slat (nsec): min=7765, max=61394, avg=13792.11, stdev=8966.91 00:09:36.757 clat (usec): min=37, max=486, avg=146.40, stdev=91.59 00:09:36.757 lat (usec): min=57, max=494, avg=160.19, stdev=95.80 00:09:36.757 clat percentiles (usec): 00:09:36.757 | 1.00th=[ 53], 5.00th=[ 57], 10.00th=[ 60], 20.00th=[ 65], 00:09:36.757 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 94], 60.00th=[ 155], 00:09:36.757 | 70.00th=[ 227], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 297], 00:09:36.757 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 441], 99.95th=[ 474], 00:09:36.757 | 99.99th=[ 486] 00:09:36.757 bw ( KiB/s): min=14035, max=14035, per=26.66%, avg=14035.00, stdev= 0.00, samples=1 00:09:36.757 iops : min= 3508, max= 3508, avg=3508.00, stdev= 0.00, samples=1 00:09:36.757 lat (usec) : 50=0.15%, 100=49.88%, 250=28.41%, 500=21.56% 00:09:36.757 cpu : usr=5.30%, sys=9.50%, ctx=5886, majf=0, minf=1 00:09:36.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.757 issued rwts: total=2814,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.757 latency : target=0, window=0, percentile=100.00%, depth=1 
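Each job block in this pass has the same text layout: slat/clat/lat timing lines, a clat percentile table, per-job bandwidth and IOPS estimates (samples=1 here, since the runtime is one second), the latency distribution, and the submit/complete depth histograms; job3 below follows the same shape. When runs like these need to be checked by a script rather than by eye, fio's JSON output is easier to consume than scraping this text. A minimal sketch, assuming fio 3.x and jq are available and that a job file equivalent to the wrapper-generated one has been saved as job.fio (the file name and field selection are illustrative):

    fio --output-format=json job.fio > result.json
    # per-job write IOPS and 99th-percentile completion latency (nanoseconds)
    jq '.jobs[] | {job: .jobname,
                   write_iops: .write.iops,
                   clat_p99_ns: .write.clat_ns.percentile."99.000000"}' result.json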
00:09:36.757 job3: (groupid=0, jobs=1): err= 0: pid=3426091: Wed Nov 20 17:54:54 2024 00:09:36.757 read: IOPS=1917, BW=7668KiB/s (7852kB/s)(7676KiB/1001msec) 00:09:36.757 slat (nsec): min=6206, max=50410, avg=20540.04, stdev=11915.49 00:09:36.757 clat (usec): min=52, max=490, avg=219.48, stdev=89.29 00:09:36.757 lat (usec): min=60, max=496, avg=240.02, stdev=92.76 00:09:36.757 clat percentiles (usec): 00:09:36.757 | 1.00th=[ 60], 5.00th=[ 74], 10.00th=[ 84], 20.00th=[ 109], 00:09:36.757 | 30.00th=[ 192], 40.00th=[ 227], 50.00th=[ 239], 60.00th=[ 249], 00:09:36.757 | 70.00th=[ 262], 80.00th=[ 281], 90.00th=[ 330], 95.00th=[ 363], 00:09:36.757 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 465], 99.95th=[ 490], 00:09:36.757 | 99.99th=[ 490] 00:09:36.757 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:36.757 slat (nsec): min=7575, max=55272, avg=24069.96, stdev=12703.19 00:09:36.757 clat (usec): min=51, max=528, avg=227.93, stdev=85.93 00:09:36.757 lat (usec): min=60, max=538, avg=252.00, stdev=88.97 00:09:36.757 clat percentiles (usec): 00:09:36.757 | 1.00th=[ 57], 5.00th=[ 70], 10.00th=[ 83], 20.00th=[ 145], 00:09:36.757 | 30.00th=[ 202], 40.00th=[ 231], 50.00th=[ 243], 60.00th=[ 255], 00:09:36.757 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 359], 00:09:36.757 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 482], 99.95th=[ 519], 00:09:36.757 | 99.99th=[ 529] 00:09:36.757 bw ( KiB/s): min= 8359, max= 8359, per=15.88%, avg=8359.00, stdev= 0.00, samples=1 00:09:36.757 iops : min= 2089, max= 2089, avg=2089.00, stdev= 0.00, samples=1 00:09:36.757 lat (usec) : 100=14.82%, 250=43.69%, 500=41.44%, 750=0.05% 00:09:36.757 cpu : usr=6.40%, sys=11.80%, ctx=3967, majf=0, minf=1 00:09:36.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.757 issued rwts: total=1919,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.757 00:09:36.757 Run status group 0 (all jobs): 00:09:36.757 READ: bw=47.4MiB/s (49.7MB/s), 7077KiB/s-22.0MiB/s (7247kB/s-23.0MB/s), io=47.4MiB (49.7MB), run=1001-1001msec 00:09:36.757 WRITE: bw=51.4MiB/s (53.9MB/s), 8184KiB/s-23.4MiB/s (8380kB/s-24.6MB/s), io=51.5MiB (54.0MB), run=1001-1001msec 00:09:36.757 00:09:36.757 Disk stats (read/write): 00:09:36.757 nvme0n1: ios=1586/1720, merge=0/0, ticks=205/220, in_queue=425, util=85.27% 00:09:36.757 nvme0n2: ios=5453/5632, merge=0/0, ticks=283/284, in_queue=567, util=85.96% 00:09:36.757 nvme0n3: ios=2560/2774, merge=0/0, ticks=324/308, in_queue=632, util=88.57% 00:09:36.757 nvme0n4: ios=1536/1872, merge=0/0, ticks=219/286, in_queue=505, util=89.51% 00:09:36.757 17:54:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:36.757 [global] 00:09:36.757 thread=1 00:09:36.757 invalidate=1 00:09:36.757 rw=write 00:09:36.757 time_based=1 00:09:36.757 runtime=1 00:09:36.757 ioengine=libaio 00:09:36.757 direct=1 00:09:36.757 bs=4096 00:09:36.757 iodepth=128 00:09:36.757 norandommap=0 00:09:36.757 numjobs=1 00:09:36.757 00:09:36.757 verify_dump=1 00:09:36.757 verify_backlog=512 00:09:36.757 verify_state_save=0 00:09:36.757 do_verify=1 00:09:36.757 verify=crc32c-intel 00:09:36.757 [job0] 00:09:36.757 
filename=/dev/nvme0n1 00:09:36.757 [job1] 00:09:36.757 filename=/dev/nvme0n2 00:09:36.757 [job2] 00:09:36.757 filename=/dev/nvme0n3 00:09:36.757 [job3] 00:09:36.757 filename=/dev/nvme0n4 00:09:36.757 Could not set queue depth (nvme0n1) 00:09:36.757 Could not set queue depth (nvme0n2) 00:09:36.757 Could not set queue depth (nvme0n3) 00:09:36.757 Could not set queue depth (nvme0n4) 00:09:37.019 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.019 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.019 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.019 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.019 fio-3.35 00:09:37.019 Starting 4 threads 00:09:37.965 00:09:37.965 job0: (groupid=0, jobs=1): err= 0: pid=3426544: Wed Nov 20 17:54:56 2024 00:09:37.965 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:09:37.965 slat (nsec): min=1199, max=2017.2k, avg=72759.66, stdev=246648.93 00:09:37.965 clat (usec): min=7545, max=11778, avg=9480.96, stdev=468.26 00:09:37.965 lat (usec): min=7547, max=11787, avg=9553.72, stdev=500.50 00:09:37.965 clat percentiles (usec): 00:09:37.965 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9110], 00:09:37.965 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9503], 00:09:37.965 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10421], 00:09:37.965 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11338], 99.95th=[11338], 00:09:37.965 | 99.99th=[11731] 00:09:37.965 write: IOPS=7060, BW=27.6MiB/s (28.9MB/s)(27.7MiB/1005msec); 0 zone resets 00:09:37.965 slat (nsec): min=1700, max=2126.9k, avg=70504.06, stdev=248821.91 00:09:37.965 clat (usec): min=3550, max=14586, avg=9053.26, stdev=686.03 00:09:37.965 lat (usec): min=5513, max=14588, avg=9123.77, stdev=710.67 00:09:37.965 clat percentiles (usec): 00:09:37.965 | 1.00th=[ 6915], 5.00th=[ 8160], 10.00th=[ 8356], 20.00th=[ 8717], 00:09:37.965 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9110], 00:09:37.965 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10159], 00:09:37.965 | 99.00th=[10945], 99.50th=[11600], 99.90th=[12518], 99.95th=[14615], 00:09:37.965 | 99.99th=[14615] 00:09:37.965 bw ( KiB/s): min=27080, max=28672, per=21.13%, avg=27876.00, stdev=1125.71, samples=2 00:09:37.965 iops : min= 6770, max= 7168, avg=6969.00, stdev=281.43, samples=2 00:09:37.965 lat (msec) : 4=0.01%, 10=91.13%, 20=8.86% 00:09:37.965 cpu : usr=2.29%, sys=4.58%, ctx=1245, majf=0, minf=1 00:09:37.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:37.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.966 issued rwts: total=6656,7096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.966 job1: (groupid=0, jobs=1): err= 0: pid=3426559: Wed Nov 20 17:54:56 2024 00:09:37.966 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:09:37.966 slat (nsec): min=1182, max=2135.5k, avg=85829.76, stdev=257332.31 00:09:37.966 clat (usec): min=7764, max=16990, avg=11127.69, stdev=2373.24 00:09:37.966 lat (usec): min=7767, max=17000, avg=11213.52, stdev=2397.88 00:09:37.966 clat percentiles (usec): 
00:09:37.966 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9241], 00:09:37.966 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[10028], 00:09:37.966 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14746], 00:09:37.966 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15795], 99.95th=[16188], 00:09:37.966 | 99.99th=[16909] 00:09:37.966 write: IOPS=5992, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1005msec); 0 zone resets 00:09:37.966 slat (nsec): min=1683, max=2453.0k, avg=83407.87, stdev=255439.79 00:09:37.966 clat (usec): min=3860, max=17769, avg=10731.61, stdev=2562.15 00:09:37.966 lat (usec): min=4371, max=17771, avg=10815.02, stdev=2586.77 00:09:37.966 clat percentiles (usec): 00:09:37.966 | 1.00th=[ 6980], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8717], 00:09:37.966 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9765], 00:09:37.966 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14484], 00:09:37.966 | 99.00th=[15270], 99.50th=[15533], 99.90th=[17171], 99.95th=[17171], 00:09:37.966 | 99.99th=[17695] 00:09:37.966 bw ( KiB/s): min=18488, max=28672, per=17.87%, avg=23580.00, stdev=7201.18, samples=2 00:09:37.966 iops : min= 4622, max= 7168, avg=5895.00, stdev=1800.29, samples=2 00:09:37.966 lat (msec) : 4=0.01%, 10=60.74%, 20=39.25% 00:09:37.966 cpu : usr=2.09%, sys=4.48%, ctx=1585, majf=0, minf=2 00:09:37.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:37.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.966 issued rwts: total=5632,6022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.966 job2: (groupid=0, jobs=1): err= 0: pid=3426578: Wed Nov 20 17:54:56 2024 00:09:37.966 read: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec) 00:09:37.966 slat (nsec): min=1229, max=3845.0k, avg=64670.05, stdev=225002.62 00:09:37.966 clat (usec): min=1950, max=18422, avg=8464.78, stdev=3718.71 00:09:37.966 lat (usec): min=1958, max=18425, avg=8529.45, stdev=3748.40 00:09:37.966 clat percentiles (usec): 00:09:37.966 | 1.00th=[ 4228], 5.00th=[ 5407], 10.00th=[ 5669], 20.00th=[ 5932], 00:09:37.966 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6587], 00:09:37.966 | 70.00th=[ 7832], 80.00th=[13960], 90.00th=[14353], 95.00th=[14615], 00:09:37.966 | 99.00th=[15270], 99.50th=[15533], 99.90th=[17171], 99.95th=[17171], 00:09:37.966 | 99.99th=[18482] 00:09:37.966 write: IOPS=7700, BW=30.1MiB/s (31.5MB/s)(30.2MiB/1005msec); 0 zone resets 00:09:37.966 slat (nsec): min=1717, max=3102.9k, avg=62449.91, stdev=218812.56 00:09:37.966 clat (usec): min=3842, max=17072, avg=8044.46, stdev=3627.37 00:09:37.966 lat (usec): min=4236, max=17633, avg=8106.91, stdev=3658.60 00:09:37.966 clat percentiles (usec): 00:09:37.966 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5669], 00:09:37.966 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6259], 00:09:37.966 | 70.00th=[ 6652], 80.00th=[13698], 90.00th=[14091], 95.00th=[14353], 00:09:37.966 | 99.00th=[15008], 99.50th=[15401], 99.90th=[16188], 99.95th=[16909], 00:09:37.966 | 99.99th=[17171] 00:09:37.966 bw ( KiB/s): min=18536, max=42904, per=23.29%, avg=30720.00, stdev=17230.78, samples=2 00:09:37.966 iops : min= 4634, max=10726, avg=7680.00, stdev=4307.69, samples=2 00:09:37.966 lat (msec) : 2=0.06%, 4=0.38%, 10=71.71%, 20=27.85% 00:09:37.966 cpu : usr=3.59%, 
sys=4.98%, ctx=1582, majf=0, minf=1 00:09:37.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:37.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.966 issued rwts: total=7680,7739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.966 job3: (groupid=0, jobs=1): err= 0: pid=3426584: Wed Nov 20 17:54:56 2024 00:09:37.966 read: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(46.2MiB/1004msec) 00:09:37.966 slat (nsec): min=1273, max=2466.8k, avg=40798.20, stdev=154066.39 00:09:37.966 clat (usec): min=3767, max=10761, avg=5407.74, stdev=803.53 00:09:37.966 lat (usec): min=3769, max=10763, avg=5448.54, stdev=807.36 00:09:37.966 clat percentiles (usec): 00:09:37.966 | 1.00th=[ 4146], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 00:09:37.966 | 30.00th=[ 4817], 40.00th=[ 4883], 50.00th=[ 5080], 60.00th=[ 5735], 00:09:37.966 | 70.00th=[ 5932], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:09:37.966 | 99.00th=[ 8455], 99.50th=[ 9241], 99.90th=[ 9503], 99.95th=[ 9503], 00:09:37.966 | 99.99th=[10552] 00:09:37.966 write: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(48.0MiB/1004msec); 0 zone resets 00:09:37.966 slat (nsec): min=1780, max=1731.4k, avg=39931.16, stdev=148567.68 00:09:37.966 clat (usec): min=1167, max=8469, avg=5170.08, stdev=760.03 00:09:37.966 lat (usec): min=1177, max=8477, avg=5210.01, stdev=763.64 00:09:37.966 clat percentiles (usec): 00:09:37.966 | 1.00th=[ 3916], 5.00th=[ 4359], 10.00th=[ 4424], 20.00th=[ 4555], 00:09:37.966 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 5473], 00:09:37.966 | 70.00th=[ 5669], 80.00th=[ 5932], 90.00th=[ 6128], 95.00th=[ 6325], 00:09:37.966 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[ 8455], 99.95th=[ 8455], 00:09:37.966 | 99.99th=[ 8455] 00:09:37.966 bw ( KiB/s): min=44480, max=53128, per=37.00%, avg=48804.00, stdev=6115.06, samples=2 00:09:37.966 iops : min=11120, max=13282, avg=12201.00, stdev=1528.76, samples=2 00:09:37.966 lat (msec) : 2=0.02%, 4=0.94%, 10=99.03%, 20=0.01% 00:09:37.966 cpu : usr=3.69%, sys=7.78%, ctx=1498, majf=0, minf=1 00:09:37.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:37.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.966 issued rwts: total=11816,12288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.966 00:09:37.966 Run status group 0 (all jobs): 00:09:37.966 READ: bw=124MiB/s (130MB/s), 21.9MiB/s-46.0MiB/s (23.0MB/s-48.2MB/s), io=124MiB (130MB), run=1004-1005msec 00:09:37.966 WRITE: bw=129MiB/s (135MB/s), 23.4MiB/s-47.8MiB/s (24.5MB/s-50.1MB/s), io=129MiB (136MB), run=1004-1005msec 00:09:37.966 00:09:37.966 Disk stats (read/write): 00:09:37.966 nvme0n1: ios=5682/5993, merge=0/0, ticks=25557/25966, in_queue=51523, util=85.57% 00:09:37.966 nvme0n2: ios=5100/5120, merge=0/0, ticks=23206/22003, in_queue=45209, util=86.06% 00:09:37.966 nvme0n3: ios=6803/7168, merge=0/0, ticks=17848/17263, in_queue=35111, util=88.59% 00:09:37.966 nvme0n4: ios=9728/10045, merge=0/0, ticks=52213/51327, in_queue=103540, util=89.52% 00:09:38.227 17:54:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 128 -t randwrite -r 1 -v 00:09:38.227 [global] 00:09:38.227 thread=1 00:09:38.227 invalidate=1 00:09:38.227 rw=randwrite 00:09:38.227 time_based=1 00:09:38.227 runtime=1 00:09:38.227 ioengine=libaio 00:09:38.227 direct=1 00:09:38.227 bs=4096 00:09:38.227 iodepth=128 00:09:38.227 norandommap=0 00:09:38.227 numjobs=1 00:09:38.227 00:09:38.227 verify_dump=1 00:09:38.227 verify_backlog=512 00:09:38.227 verify_state_save=0 00:09:38.227 do_verify=1 00:09:38.227 verify=crc32c-intel 00:09:38.227 [job0] 00:09:38.227 filename=/dev/nvme0n1 00:09:38.227 [job1] 00:09:38.227 filename=/dev/nvme0n2 00:09:38.227 [job2] 00:09:38.227 filename=/dev/nvme0n3 00:09:38.227 [job3] 00:09:38.227 filename=/dev/nvme0n4 00:09:38.227 Could not set queue depth (nvme0n1) 00:09:38.227 Could not set queue depth (nvme0n2) 00:09:38.227 Could not set queue depth (nvme0n3) 00:09:38.227 Could not set queue depth (nvme0n4) 00:09:38.487 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.487 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.487 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.487 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.487 fio-3.35 00:09:38.487 Starting 4 threads 00:09:39.874 00:09:39.874 job0: (groupid=0, jobs=1): err= 0: pid=3426991: Wed Nov 20 17:54:58 2024 00:09:39.874 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:09:39.874 slat (nsec): min=1193, max=4291.7k, avg=64160.16, stdev=207910.72 00:09:39.874 clat (usec): min=4979, max=14584, avg=8304.30, stdev=1391.13 00:09:39.874 lat (usec): min=4981, max=14593, avg=8368.46, stdev=1390.56 00:09:39.874 clat percentiles (usec): 00:09:39.874 | 1.00th=[ 6915], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7701], 00:09:39.874 | 30.00th=[ 7767], 40.00th=[ 7832], 50.00th=[ 7898], 60.00th=[ 7963], 00:09:39.874 | 70.00th=[ 8029], 80.00th=[ 8094], 90.00th=[10290], 95.00th=[12649], 00:09:39.874 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:09:39.874 | 99.99th=[14615] 00:09:39.874 write: IOPS=7983, BW=31.2MiB/s (32.7MB/s)(31.3MiB/1003msec); 0 zone resets 00:09:39.874 slat (nsec): min=1655, max=1792.9k, avg=61100.71, stdev=190294.42 00:09:39.874 clat (usec): min=2667, max=14023, avg=7883.28, stdev=1325.30 00:09:39.874 lat (usec): min=3321, max=14024, avg=7944.38, stdev=1322.70 00:09:39.874 clat percentiles (usec): 00:09:39.874 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 7308], 00:09:39.874 | 30.00th=[ 7373], 40.00th=[ 7439], 50.00th=[ 7504], 60.00th=[ 7570], 00:09:39.874 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 9896], 95.00th=[11994], 00:09:39.874 | 99.00th=[12387], 99.50th=[12518], 99.90th=[13304], 99.95th=[13960], 00:09:39.874 | 99.99th=[13960] 00:09:39.874 bw ( KiB/s): min=31464, max=31576, per=25.86%, avg=31520.00, stdev=79.20, samples=2 00:09:39.874 iops : min= 7866, max= 7894, avg=7880.00, stdev=19.80, samples=2 00:09:39.874 lat (msec) : 4=0.17%, 10=89.35%, 20=10.48% 00:09:39.874 cpu : usr=1.90%, sys=6.29%, ctx=2162, majf=0, minf=1 00:09:39.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:39.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.874 issued rwts: 
total=7680,8007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.874 job1: (groupid=0, jobs=1): err= 0: pid=3426992: Wed Nov 20 17:54:58 2024 00:09:39.874 read: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec) 00:09:39.874 slat (nsec): min=1189, max=1383.5k, avg=63501.59, stdev=180551.85 00:09:39.874 clat (usec): min=6581, max=13461, avg=8205.89, stdev=1262.79 00:09:39.874 lat (usec): min=7236, max=13936, avg=8269.39, stdev=1260.44 00:09:39.874 clat percentiles (usec): 00:09:39.874 | 1.00th=[ 7111], 5.00th=[ 7439], 10.00th=[ 7570], 20.00th=[ 7701], 00:09:39.874 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 7898], 60.00th=[ 7963], 00:09:39.874 | 70.00th=[ 8029], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[12649], 00:09:39.874 | 99.00th=[13304], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:09:39.874 | 99.99th=[13435] 00:09:39.874 write: IOPS=8149, BW=31.8MiB/s (33.4MB/s)(31.9MiB/1002msec); 0 zone resets 00:09:39.874 slat (nsec): min=1636, max=1148.3k, avg=60422.42, stdev=168381.91 00:09:39.874 clat (usec): min=1798, max=12811, avg=7806.99, stdev=1210.78 00:09:39.874 lat (usec): min=2280, max=13075, avg=7867.41, stdev=1207.63 00:09:39.874 clat percentiles (usec): 00:09:39.874 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 7308], 00:09:39.874 | 30.00th=[ 7373], 40.00th=[ 7439], 50.00th=[ 7504], 60.00th=[ 7570], 00:09:39.874 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8455], 95.00th=[11731], 00:09:39.874 | 99.00th=[12256], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:09:39.874 | 99.99th=[12780] 00:09:39.874 bw ( KiB/s): min=31536, max=32776, per=26.38%, avg=32156.00, stdev=876.81, samples=2 00:09:39.874 iops : min= 7884, max= 8194, avg=8039.00, stdev=219.20, samples=2 00:09:39.874 lat (msec) : 2=0.01%, 4=0.18%, 10=93.07%, 20=6.75% 00:09:39.874 cpu : usr=2.30%, sys=6.99%, ctx=2002, majf=0, minf=1 00:09:39.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:39.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.874 issued rwts: total=7680,8166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.874 job2: (groupid=0, jobs=1): err= 0: pid=3427007: Wed Nov 20 17:54:58 2024 00:09:39.874 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:09:39.874 slat (nsec): min=1214, max=2479.7k, avg=74691.63, stdev=265122.78 00:09:39.874 clat (usec): min=5110, max=13223, avg=9629.28, stdev=1046.11 00:09:39.874 lat (usec): min=5112, max=13224, avg=9703.97, stdev=1060.18 00:09:39.874 clat percentiles (usec): 00:09:39.874 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 9110], 00:09:39.874 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:09:39.874 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[12780], 00:09:39.874 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13173], 99.95th=[13173], 00:09:39.875 | 99.99th=[13173] 00:09:39.875 write: IOPS=6696, BW=26.2MiB/s (27.4MB/s)(26.2MiB/1002msec); 0 zone resets 00:09:39.875 slat (nsec): min=1671, max=2419.4k, avg=72877.30, stdev=257653.34 00:09:39.875 clat (usec): min=866, max=12677, avg=9322.81, stdev=1094.28 00:09:39.875 lat (usec): min=1753, max=12865, avg=9395.69, stdev=1106.09 00:09:39.875 clat percentiles (usec): 00:09:39.875 | 1.00th=[ 6128], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 
00:09:39.875 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:09:39.875 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[10159], 95.00th=[11994], 00:09:39.875 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:09:39.875 | 99.99th=[12649] 00:09:39.875 bw ( KiB/s): min=26424, max=26824, per=21.84%, avg=26624.00, stdev=282.84, samples=2 00:09:39.875 iops : min= 6606, max= 6706, avg=6656.00, stdev=70.71, samples=2 00:09:39.875 lat (usec) : 1000=0.01% 00:09:39.875 lat (msec) : 2=0.16%, 4=0.24%, 10=87.89%, 20=11.70% 00:09:39.875 cpu : usr=1.00%, sys=4.90%, ctx=1190, majf=0, minf=1 00:09:39.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:39.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.875 issued rwts: total=6656,6710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.875 job3: (groupid=0, jobs=1): err= 0: pid=3427014: Wed Nov 20 17:54:58 2024 00:09:39.875 read: IOPS=7576, BW=29.6MiB/s (31.0MB/s)(29.7MiB/1003msec) 00:09:39.875 slat (nsec): min=1247, max=2599.2k, avg=65502.27, stdev=234168.97 00:09:39.875 clat (usec): min=2412, max=12461, avg=8495.59, stdev=1856.03 00:09:39.875 lat (usec): min=3481, max=12470, avg=8561.09, stdev=1875.42 00:09:39.875 clat percentiles (usec): 00:09:39.875 | 1.00th=[ 4359], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 7111], 00:09:39.875 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:09:39.875 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[ 9765], 95.00th=[10028], 00:09:39.875 | 99.00th=[10814], 99.50th=[11076], 99.90th=[11600], 99.95th=[11600], 00:09:39.875 | 99.99th=[12518] 00:09:39.875 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:09:39.875 slat (nsec): min=1682, max=2504.6k, avg=63205.77, stdev=234139.12 00:09:39.875 clat (usec): min=2657, max=11780, avg=8154.63, stdev=1924.80 00:09:39.875 lat (usec): min=2665, max=11791, avg=8217.84, stdev=1946.27 00:09:39.875 clat percentiles (usec): 00:09:39.875 | 1.00th=[ 3949], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 5538], 00:09:39.875 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:09:39.875 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9503], 95.00th=[ 9634], 00:09:39.875 | 99.00th=[10683], 99.50th=[10945], 99.90th=[11338], 99.95th=[11600], 00:09:39.875 | 99.99th=[11731] 00:09:39.875 bw ( KiB/s): min=28672, max=32768, per=25.20%, avg=30720.00, stdev=2896.31, samples=2 00:09:39.875 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:09:39.875 lat (msec) : 4=0.71%, 10=95.67%, 20=3.62% 00:09:39.875 cpu : usr=2.59%, sys=4.89%, ctx=1314, majf=0, minf=2 00:09:39.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:39.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.875 issued rwts: total=7599,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.875 00:09:39.875 Run status group 0 (all jobs): 00:09:39.875 READ: bw=115MiB/s (121MB/s), 25.9MiB/s-29.9MiB/s (27.2MB/s-31.4MB/s), io=116MiB (121MB), run=1002-1003msec 00:09:39.875 WRITE: bw=119MiB/s (125MB/s), 26.2MiB/s-31.8MiB/s (27.4MB/s-33.4MB/s), io=119MiB (125MB), run=1002-1003msec 00:09:39.875 
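This closes the iodepth=128 randwrite pass; the per-disk utilization figures follow below. The script then moves on to the hotplug test: fio is restarted in the background with a 10-second runtime, and the backing bdevs are deleted out from under it over the RPC interface, so the "Operation not supported" io_u errors that appear further down are the expected outcome rather than a failure. The sequence visible in the trace, reduced to a sketch (paths shortened relative to the trace, bdev names as deleted by fio.sh):

    # start a 10s read workload against the connected namespaces, in the background
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # delete the backing bdevs mid-I/O via SPDK RPC
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_malloc_delete Malloc0
    # fio is expected to exit non-zero once its files disappear
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'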
00:09:39.875 Disk stats (read/write): 00:09:39.875 nvme0n1: ios=6706/6659, merge=0/0, ticks=13180/12579, in_queue=25759, util=85.77% 00:09:39.875 nvme0n2: ios=6656/6665, merge=0/0, ticks=13066/12404, in_queue=25470, util=86.06% 00:09:39.875 nvme0n3: ios=5493/5632, merge=0/0, ticks=24317/24234, in_queue=48551, util=88.68% 00:09:39.875 nvme0n4: ios=6265/6656, merge=0/0, ticks=25420/25954, in_queue=51374, util=89.62% 00:09:39.875 17:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:39.875 17:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3427304 00:09:39.875 17:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:39.875 17:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:39.875 [global] 00:09:39.875 thread=1 00:09:39.875 invalidate=1 00:09:39.875 rw=read 00:09:39.875 time_based=1 00:09:39.875 runtime=10 00:09:39.875 ioengine=libaio 00:09:39.875 direct=1 00:09:39.875 bs=4096 00:09:39.875 iodepth=1 00:09:39.875 norandommap=1 00:09:39.875 numjobs=1 00:09:39.875 00:09:39.875 [job0] 00:09:39.875 filename=/dev/nvme0n1 00:09:39.875 [job1] 00:09:39.875 filename=/dev/nvme0n2 00:09:39.875 [job2] 00:09:39.875 filename=/dev/nvme0n3 00:09:39.875 [job3] 00:09:39.875 filename=/dev/nvme0n4 00:09:39.875 Could not set queue depth (nvme0n1) 00:09:39.875 Could not set queue depth (nvme0n2) 00:09:39.875 Could not set queue depth (nvme0n3) 00:09:39.875 Could not set queue depth (nvme0n4) 00:09:40.135 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.135 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.135 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.135 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.135 fio-3.35 00:09:40.135 Starting 4 threads 00:09:42.683 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:42.944 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41644032, buflen=4096 00:09:42.944 fio: pid=3427515, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.944 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:43.205 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=52039680, buflen=4096 00:09:43.205 fio: pid=3427511, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:43.205 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.205 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:43.205 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=544768, buflen=4096 00:09:43.205 fio: pid=3427495, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:43.205 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.205 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:43.467 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11624448, buflen=4096 00:09:43.467 fio: pid=3427499, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:43.467 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.467 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:43.467 00:09:43.467 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3427495: Wed Nov 20 17:55:01 2024 00:09:43.467 read: IOPS=5556, BW=21.7MiB/s (22.8MB/s)(64.5MiB/2973msec) 00:09:43.467 slat (usec): min=5, max=19965, avg=19.39, stdev=267.05 00:09:43.467 clat (usec): min=32, max=22342, avg=156.28, stdev=199.58 00:09:43.467 lat (usec): min=50, max=22348, avg=175.67, stdev=335.62 00:09:43.467 clat percentiles (usec): 00:09:43.467 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 58], 20.00th=[ 67], 00:09:43.467 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 98], 60.00th=[ 194], 00:09:43.467 | 70.00th=[ 239], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 326], 00:09:43.467 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 457], 99.95th=[ 474], 00:09:43.467 | 99.99th=[ 644] 00:09:43.467 bw ( KiB/s): min=16568, max=27056, per=29.62%, avg=21820.80, stdev=4010.85, samples=5 00:09:43.467 iops : min= 4142, max= 6764, avg=5455.20, stdev=1002.71, samples=5 00:09:43.467 lat (usec) : 50=1.55%, 100=49.51%, 250=24.03%, 500=24.89%, 750=0.01% 00:09:43.467 lat (msec) : 50=0.01% 00:09:43.467 cpu : usr=3.73%, sys=13.90%, ctx=16524, majf=0, minf=2 00:09:43.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.467 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.467 issued rwts: total=16518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.467 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3427499: Wed Nov 20 17:55:01 2024 00:09:43.467 read: IOPS=6039, BW=23.6MiB/s (24.7MB/s)(75.1MiB/3183msec) 00:09:43.467 slat (usec): min=5, max=15938, avg=18.34, stdev=248.22 00:09:43.467 clat (usec): min=39, max=22394, avg=143.38, stdev=246.99 00:09:43.467 lat (usec): min=49, max=22401, avg=161.71, stdev=353.15 00:09:43.467 clat percentiles (usec): 00:09:43.467 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 59], 00:09:43.467 | 30.00th=[ 67], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 108], 00:09:43.467 | 70.00th=[ 229], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 314], 00:09:43.467 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 449], 99.95th=[ 469], 00:09:43.467 | 99.99th=[22152] 00:09:43.467 bw ( KiB/s): min=16048, max=30104, per=31.63%, avg=23298.67, stdev=5563.26, samples=6 00:09:43.467 iops : min= 4012, max= 7526, avg=5824.67, stdev=1390.81, samples=6 00:09:43.467 lat (usec) : 50=4.08%, 100=53.56%, 250=20.69%, 500=21.65%, 750=0.01% 00:09:43.467 lat (msec) : 50=0.01% 00:09:43.467 cpu : usr=4.18%, sys=13.51%, ctx=19229, 
majf=0, minf=2 00:09:43.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.467 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.467 issued rwts: total=19223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.467 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3427511: Wed Nov 20 17:55:01 2024 00:09:43.467 read: IOPS=4549, BW=17.8MiB/s (18.6MB/s)(49.6MiB/2793msec) 00:09:43.467 slat (usec): min=5, max=14850, avg=19.50, stdev=168.26 00:09:43.467 clat (usec): min=44, max=1109, avg=195.14, stdev=95.65 00:09:43.467 lat (usec): min=58, max=15080, avg=214.64, stdev=195.89 00:09:43.467 clat percentiles (usec): 00:09:43.467 | 1.00th=[ 59], 5.00th=[ 72], 10.00th=[ 76], 20.00th=[ 84], 00:09:43.467 | 30.00th=[ 99], 40.00th=[ 153], 50.00th=[ 227], 60.00th=[ 245], 00:09:43.467 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 351], 00:09:43.467 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 474], 99.95th=[ 490], 00:09:43.467 | 99.99th=[ 660] 00:09:43.467 bw ( KiB/s): min=16216, max=18208, per=23.80%, avg=17531.20, stdev=770.26, samples=5 00:09:43.467 iops : min= 4054, max= 4552, avg=4382.80, stdev=192.56, samples=5 00:09:43.467 lat (usec) : 50=0.02%, 100=30.43%, 250=33.22%, 500=36.30%, 750=0.02% 00:09:43.467 lat (msec) : 2=0.01% 00:09:43.467 cpu : usr=4.08%, sys=12.25%, ctx=12709, majf=0, minf=1 00:09:43.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.467 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.467 issued rwts: total=12706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.467 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3427515: Wed Nov 20 17:55:01 2024 00:09:43.467 read: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(39.7MiB/2599msec) 00:09:43.468 slat (nsec): min=6146, max=65654, avg=21470.21, stdev=11739.30 00:09:43.468 clat (usec): min=52, max=1107, avg=228.95, stdev=84.94 00:09:43.468 lat (usec): min=58, max=1114, avg=250.42, stdev=87.91 00:09:43.468 clat percentiles (usec): 00:09:43.468 | 1.00th=[ 58], 5.00th=[ 70], 10.00th=[ 86], 20.00th=[ 139], 00:09:43.468 | 30.00th=[ 221], 40.00th=[ 237], 50.00th=[ 249], 60.00th=[ 260], 00:09:43.468 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 322], 95.00th=[ 363], 00:09:43.468 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 486], 99.95th=[ 498], 00:09:43.468 | 99.99th=[ 832] 00:09:43.468 bw ( KiB/s): min=14584, max=17016, per=21.41%, avg=15766.40, stdev=946.99, samples=5 00:09:43.468 iops : min= 3646, max= 4254, avg=3941.60, stdev=236.75, samples=5 00:09:43.468 lat (usec) : 100=13.18%, 250=37.42%, 500=49.35%, 750=0.02%, 1000=0.01% 00:09:43.468 lat (msec) : 2=0.01% 00:09:43.468 cpu : usr=4.43%, sys=12.78%, ctx=10169, majf=0, minf=2 00:09:43.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.468 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.468 issued rwts: total=10168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.468 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:43.468 00:09:43.468 Run status group 0 (all jobs): 00:09:43.468 READ: bw=71.9MiB/s (75.4MB/s), 15.3MiB/s-23.6MiB/s (16.0MB/s-24.7MB/s), io=229MiB (240MB), run=2599-3183msec 00:09:43.468 00:09:43.468 Disk stats (read/write): 00:09:43.468 nvme0n1: ios=15436/0, merge=0/0, ticks=1680/0, in_queue=1680, util=92.89% 00:09:43.468 nvme0n2: ios=18159/0, merge=0/0, ticks=1830/0, in_queue=1830, util=93.41% 00:09:43.468 nvme0n3: ios=11312/0, merge=0/0, ticks=1641/0, in_queue=1641, util=96.11% 00:09:43.468 nvme0n4: ios=10167/0, merge=0/0, ticks=1498/0, in_queue=1498, util=96.47% 00:09:43.729 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.729 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:43.990 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.990 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:43.990 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.990 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:44.251 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:44.251 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:44.513 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:44.513 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3427304 00:09:44.513 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:44.513 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:45.900 nvmf hotplug test: fio failed as expected 00:09:45.900 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.901 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:45.901 rmmod nvme_rdma 00:09:46.161 rmmod nvme_fabrics 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3423788 ']' 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3423788 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3423788 ']' 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3423788 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3423788 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3423788' 00:09:46.161 killing process with pid 3423788 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3423788 00:09:46.161 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 
3423788 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:46.423 00:09:46.423 real 0m28.792s 00:09:46.423 user 2m42.954s 00:09:46.423 sys 0m11.036s 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 ************************************ 00:09:46.423 END TEST nvmf_fio_target 00:09:46.423 ************************************ 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 ************************************ 00:09:46.423 START TEST nvmf_bdevio 00:09:46.423 ************************************ 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:46.423 * Looking for test storage... 00:09:46.423 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.423 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.685 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.685 --rc genhtml_branch_coverage=1 00:09:46.685 --rc genhtml_function_coverage=1 00:09:46.685 --rc genhtml_legend=1 00:09:46.685 --rc geninfo_all_blocks=1 00:09:46.685 --rc geninfo_unexecuted_blocks=1 00:09:46.685 00:09:46.685 ' 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.685 --rc genhtml_branch_coverage=1 00:09:46.685 --rc genhtml_function_coverage=1 00:09:46.685 --rc genhtml_legend=1 00:09:46.685 --rc geninfo_all_blocks=1 00:09:46.685 --rc geninfo_unexecuted_blocks=1 00:09:46.685 00:09:46.685 ' 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.685 --rc genhtml_branch_coverage=1 00:09:46.685 --rc genhtml_function_coverage=1 00:09:46.685 --rc genhtml_legend=1 00:09:46.685 --rc geninfo_all_blocks=1 00:09:46.685 --rc geninfo_unexecuted_blocks=1 00:09:46.685 00:09:46.685 ' 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.685 --rc genhtml_branch_coverage=1 00:09:46.685 --rc genhtml_function_coverage=1 00:09:46.685 --rc genhtml_legend=1 00:09:46.685 --rc geninfo_all_blocks=1 00:09:46.685 --rc geninfo_unexecuted_blocks=1 00:09:46.685 00:09:46.685 ' 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.685 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.686 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.686 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.836 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.836 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.836 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.836 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:54.837 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:54.837 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:54.837 17:55:13 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:54.837 Found net devices under 0000:98:00.0: mlx_0_0 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:54.837 Found net devices under 0000:98:00.1: mlx_0_1 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:54.837 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
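The get_rdma_if_list/rxe_cfg dance replayed in the trace above reduces to a small filter: ask rxe_cfg for the RDMA-capable netdevs, then keep only the PCI-discovered netdevs that appear in that list. A minimal sketch reconstructed from the traced commands (net_devs is assumed to already hold the PCI scan results, mlx_0_0 and mlx_0_1 on this rig; rxe_cfg wraps scripts/rxe_cfg_small.sh):

# Sketch of the interface filter traced above.
get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs
    # rxe_cfg prints one RDMA-capable interface name per line
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"    # keep it: mlx_0_0, then mlx_0_1 here
                continue 2         # move on to the next net_dev once matched
            fi
        done
    done
}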
00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:54.838 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:54.838 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:54.838 altname enp152s0f0np0 00:09:54.838 altname ens817f0np0 00:09:54.838 inet 192.168.100.8/24 scope global mlx_0_0 00:09:54.838 valid_lft forever preferred_lft forever 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:54.838 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:54.838 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:54.838 altname enp152s0f1np1 00:09:54.838 altname ens817f1np1 00:09:54.838 inet 192.168.100.9/24 scope global mlx_0_1 00:09:54.838 valid_lft forever preferred_lft forever 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
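Both addresses harvested above come from the same three-step pipeline each time: ip -o -4 prints one line per interface, field 4 is ADDR/PREFIX, and cut drops the prefix length. Condensed directly from the traced commands:

# get_ip_address as replayed in the xtrace: field 4 of the one-line (-o)
# output is e.g. "192.168.100.8/24"; cut -d/ -f1 strips the "/24".
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
get_ip_address mlx_0_1    # -> 192.168.100.9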
00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:54.838 192.168.100.9' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:54.838 192.168.100.9' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:54.838 192.168.100.9' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.838 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3433205 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3433205 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3433205 ']' 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.101 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.101 [2024-11-20 17:55:13.420986] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:09:55.101 [2024-11-20 17:55:13.421036] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.101 [2024-11-20 17:55:13.516311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.101 [2024-11-20 17:55:13.559470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.101 [2024-11-20 17:55:13.559518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.101 [2024-11-20 17:55:13.559527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.101 [2024-11-20 17:55:13.559534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.101 [2024-11-20 17:55:13.559540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
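nvmfappstart, traced above, amounts to launching nvmf_tgt in the background with the requested core mask, remembering its pid, and blocking until the app answers on /var/tmp/spdk.sock. A hedged sketch of that pattern; the polling loop is a simplification of the real waitforlisten in autotest_common.sh:

# Simplified nvmfappstart/waitforlisten pattern (the retry loop is a sketch).
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1    # bail out if the target died before coming up
    sleep 0.5
done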
00:09:55.101 [2024-11-20 17:55:13.561784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:55.101 [2024-11-20 17:55:13.561929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:55.101 [2024-11-20 17:55:13.562093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:55.101 [2024-11-20 17:55:13.562094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.047 [2024-11-20 17:55:14.326581] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x72f7a0/0x733c90) succeed. 00:09:56.047 [2024-11-20 17:55:14.342270] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x730e30/0x775330) succeed. 
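With both IB devices up, the target provisioning that the next entries replay via rpc_cmd is just five RPCs. Written out with rpc.py directly (rpc_cmd wraps the same script; the arguments are verbatim from the trace):

# The bdevio.sh setup sequence, as issued against /var/tmp/spdk.sock:
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420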
00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.047 Malloc0 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.047 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.309 [2024-11-20 17:55:14.581692] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:56.309 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.309 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:56.309 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:56.309 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:56.309 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:56.309 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:56.309 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:56.309 { 00:09:56.309 "params": { 00:09:56.309 "name": "Nvme$subsystem", 00:09:56.309 "trtype": "$TEST_TRANSPORT", 00:09:56.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.309 "adrfam": "ipv4", 00:09:56.309 "trsvcid": "$NVMF_PORT", 00:09:56.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.309 "hdgst": ${hdgst:-false}, 00:09:56.310 "ddgst": ${ddgst:-false} 00:09:56.310 }, 00:09:56.310 "method": "bdev_nvme_attach_controller" 00:09:56.310 } 00:09:56.310 EOF 00:09:56.310 )") 00:09:56.310 17:55:14 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:09:56.310 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:09:56.310 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:09:56.310 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:56.310 "params": {
00:09:56.310 "name": "Nvme1",
00:09:56.310 "trtype": "rdma",
00:09:56.310 "traddr": "192.168.100.8",
00:09:56.310 "adrfam": "ipv4",
00:09:56.310 "trsvcid": "4420",
00:09:56.310 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:56.310 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:56.310 "hdgst": false,
00:09:56.310 "ddgst": false
00:09:56.310 },
00:09:56.310 "method": "bdev_nvme_attach_controller"
00:09:56.310 }'
00:09:56.310 [2024-11-20 17:55:14.638395] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... [2024-11-20 17:55:14.638466] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433315 ]
00:09:56.310 [2024-11-20 17:55:14.725118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:56.310 [2024-11-20 17:55:14.769641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:56.310 [2024-11-20 17:55:14.769758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:56.310 [2024-11-20 17:55:14.769761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:56.572 I/O targets:
00:09:56.572 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:09:56.572
00:09:56.572
00:09:56.572 CUnit - A unit testing framework for C - Version 2.1-3
00:09:56.572 http://cunit.sourceforge.net/
00:09:56.572
00:09:56.572
00:09:56.572 Suite: bdevio tests on: Nvme1n1
00:09:56.572 Test: blockdev write read block ...passed
00:09:56.572 Test: blockdev write zeroes read block ...passed
00:09:56.572 Test: blockdev write zeroes read no split ...passed
00:09:56.572 Test: blockdev write zeroes read split ...passed
00:09:56.572 Test: blockdev write zeroes read split partial ...passed
00:09:56.572 Test: blockdev reset ...[2024-11-20 17:55:14.969256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:09:56.572 [2024-11-20 17:55:14.998776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:09:56.572 [2024-11-20 17:55:15.029497] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
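For reference, the complete document bdevio read on fd 62 plausibly looks like the sketch below. The params object is verbatim from the printf above; the outer subsystems/config wrapper is the usual SPDK app JSON-config shape and is an assumption here, as is the temp-file path:

# Reconstructed bdevio --json input (wrapper shape and path are assumptions).
cat > /tmp/bdevio_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON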
00:09:56.572 passed 00:09:56.572 Test: blockdev write read 8 blocks ...passed 00:09:56.572 Test: blockdev write read size > 128k ...passed 00:09:56.572 Test: blockdev write read invalid size ...passed 00:09:56.572 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:56.572 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:56.572 Test: blockdev write read max offset ...passed 00:09:56.572 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:56.572 Test: blockdev writev readv 8 blocks ...passed 00:09:56.572 Test: blockdev writev readv 30 x 1block ...passed 00:09:56.572 Test: blockdev writev readv block ...passed 00:09:56.572 Test: blockdev writev readv size > 128k ...passed 00:09:56.572 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:56.572 Test: blockdev comparev and writev ...[2024-11-20 17:55:15.035368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.572 [2024-11-20 17:55:15.035393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:56.572 [2024-11-20 17:55:15.035401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.572 [2024-11-20 17:55:15.035406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:56.572 [2024-11-20 17:55:15.035569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.572 [2024-11-20 17:55:15.035576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:56.572 [2024-11-20 17:55:15.035583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.572 [2024-11-20 17:55:15.035588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:56.572 [2024-11-20 17:55:15.035779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.572 [2024-11-20 17:55:15.035786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:56.572 [2024-11-20 17:55:15.035792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.572 [2024-11-20 17:55:15.035797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:56.572 [2024-11-20 17:55:15.035967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.572 [2024-11-20 17:55:15.035973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:56.572 [2024-11-20 17:55:15.035980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.572 [2024-11-20 17:55:15.035985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:09:56.572 passed
00:09:56.572 Test: blockdev nvme passthru rw ...passed
00:09:56.572 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:55:15.036571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:09:56.572 [2024-11-20 17:55:15.036583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:09:56.572 [2024-11-20 17:55:15.036628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:09:56.572 [2024-11-20 17:55:15.036633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:09:56.572 [2024-11-20 17:55:15.036675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:09:56.572 [2024-11-20 17:55:15.036680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:09:56.572 [2024-11-20 17:55:15.036727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:09:56.572 [2024-11-20 17:55:15.036733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:09:56.572 passed
00:09:56.572 Test: blockdev nvme admin passthru ...passed
00:09:56.572 Test: blockdev copy ...passed
00:09:56.572
00:09:56.572 Run Summary: Type Total Ran Passed Failed Inactive
00:09:56.572 suites 1 1 n/a 0 0
00:09:56.572 tests 23 23 23 0 0
00:09:56.572 asserts 152 152 152 0 n/a
00:09:56.572
00:09:56.572 Elapsed time = 0.196 seconds
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:56.834 rmmod nvme_rdma
00:09:56.834 rmmod nvme_fabrics
00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:56.834 17:55:15 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3433205 ']' 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3433205 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3433205 ']' 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3433205 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3433205 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3433205' 00:09:56.834 killing process with pid 3433205 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3433205 00:09:56.834 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3433205 00:09:57.096 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.096 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:57.096 00:09:57.096 real 0m10.772s 00:09:57.096 user 0m11.456s 00:09:57.096 sys 0m6.855s 00:09:57.096 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.096 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.096 ************************************ 00:09:57.096 END TEST nvmf_bdevio 00:09:57.096 ************************************ 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:57.358 00:09:57.358 real 4m34.030s 00:09:57.358 user 11m50.908s 00:09:57.358 sys 1m43.572s 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.358 ************************************ 00:09:57.358 END TEST nvmf_target_core 00:09:57.358 ************************************ 00:09:57.358 17:55:15 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:57.358 17:55:15 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.358 17:55:15 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.358 17:55:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:57.358 ************************************ 00:09:57.358 START TEST nvmf_target_extra 00:09:57.358 ************************************ 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:57.358 * Looking for test storage... 00:09:57.358 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.358 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.359 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:57.359 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.622 --rc genhtml_branch_coverage=1 00:09:57.622 --rc genhtml_function_coverage=1 00:09:57.622 --rc genhtml_legend=1 00:09:57.622 --rc geninfo_all_blocks=1 00:09:57.622 --rc geninfo_unexecuted_blocks=1 00:09:57.622 00:09:57.622 ' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.622 --rc genhtml_branch_coverage=1 00:09:57.622 --rc genhtml_function_coverage=1 00:09:57.622 --rc genhtml_legend=1 00:09:57.622 --rc geninfo_all_blocks=1 00:09:57.622 --rc geninfo_unexecuted_blocks=1 00:09:57.622 00:09:57.622 ' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.622 --rc genhtml_branch_coverage=1 00:09:57.622 --rc genhtml_function_coverage=1 00:09:57.622 --rc genhtml_legend=1 00:09:57.622 --rc geninfo_all_blocks=1 00:09:57.622 --rc geninfo_unexecuted_blocks=1 00:09:57.622 00:09:57.622 ' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.622 --rc genhtml_branch_coverage=1 00:09:57.622 --rc genhtml_function_coverage=1 00:09:57.622 --rc genhtml_legend=1 00:09:57.622 --rc geninfo_all_blocks=1 00:09:57.622 --rc geninfo_unexecuted_blocks=1 00:09:57.622 00:09:57.622 ' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.622 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:57.623 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:57.623 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:57.623 17:55:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:57.623 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.623 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.623 17:55:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:57.623 ************************************ 00:09:57.623 START TEST nvmf_example 00:09:57.623 ************************************ 00:09:57.623 17:55:15 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:57.623 * Looking for test storage... 
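Every START TEST / END TEST pair in this log, including the nvmf_example one opening here, is printed by the run_test harness: banner, suite run under bash's time builtin (the real/user/sys triplets above), closing banner. A stripped-down sketch of that visible behavior; the real helper in autotest_common.sh does more bookkeeping:

# Minimal run_test sketch matching the banners and timing seen in this log.
run_test() {
    local test_name=$1
    shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"                 # emits the real/user/sys lines
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
}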
00:09:57.623 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.623 --rc genhtml_branch_coverage=1 00:09:57.623 --rc genhtml_function_coverage=1 00:09:57.623 --rc genhtml_legend=1 00:09:57.623 --rc geninfo_all_blocks=1 00:09:57.623 --rc geninfo_unexecuted_blocks=1 00:09:57.623 00:09:57.623 ' 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.623 --rc genhtml_branch_coverage=1 00:09:57.623 --rc genhtml_function_coverage=1 00:09:57.623 --rc genhtml_legend=1 00:09:57.623 --rc geninfo_all_blocks=1 00:09:57.623 --rc geninfo_unexecuted_blocks=1 00:09:57.623 00:09:57.623 ' 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.623 --rc genhtml_branch_coverage=1 00:09:57.623 --rc genhtml_function_coverage=1 00:09:57.623 --rc genhtml_legend=1 00:09:57.623 --rc geninfo_all_blocks=1 00:09:57.623 --rc geninfo_unexecuted_blocks=1 00:09:57.623 00:09:57.623 ' 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.623 --rc genhtml_branch_coverage=1 00:09:57.623 --rc genhtml_function_coverage=1 00:09:57.623 --rc genhtml_legend=1 00:09:57.623 --rc geninfo_all_blocks=1 00:09:57.623 --rc geninfo_unexecuted_blocks=1 00:09:57.623 00:09:57.623 ' 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
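
The lt/cmp_versions trace just above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: each version string is split on '.', '-' and ':' and the fields are compared numerically left to right. A minimal sketch of that logic condensed into one function — the cmp_lt name and the standalone framing are ours; the script's real helpers are lt, cmp_versions and decimal:

cmp_lt() {   # cmp_lt 1.15 2 -> exit 0 when $1 sorts before $2
    local IFS=.-:            # split fields the same way the trace does
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
cmp_lt 1.15 2 && echo older   # prints "older", matching the branch taken above
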
00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.623 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.885 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.886 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
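
The "[: : integer expression expected" complaint repeated above is a real, if harmless, script bug rather than a test failure: common.sh line 33 runs '[' "$flag" -eq 1 ']' on a variable that is unset in this run, so test sees an empty string where -eq demands an integer, prints the message, and the condition simply evaluates false. The usual hardening is a numeric default on the expansion, along these lines (FLAG is a stand-in name, not the actual variable in common.sh):

[ "${FLAG:-0}" -eq 1 ]   # empty/unset falls back to 0 instead of erroring
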
00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.886 17:55:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.032 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:06.032 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:06.033 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:06.033 Found net devices under 0000:98:00.0: mlx_0_0 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:06.033 Found net devices under 0000:98:00.1: mlx_0_1 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.033 17:55:23 
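
The device discovery traced above boils down to a sysfs walk: both ConnectX ports (Mellanox vendor 0x15b3, device 0x1015) are found and mapped to their kernel netdevs mlx_0_0 and mlx_0_1. A condensed sketch of the same walk using only standard sysfs paths; the printed format is ours:

for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
    [ "$vendor" = 0x15b3 ] || continue        # Mellanox only, as in the mlx5 branch above
    for net in "$pci"/net/*; do
        [ -e "$net" ] || continue             # skip functions with no bound netdev
        echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
    done
done
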
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:06.033 17:55:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:06.033 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:06.033 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:06.033 altname enp152s0f0np0 00:10:06.033 altname ens817f0np0 00:10:06.033 inet 192.168.100.8/24 scope global mlx_0_0 00:10:06.033 valid_lft forever preferred_lft forever 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:06.033 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:06.033 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:06.033 altname enp152s0f1np1 00:10:06.033 altname ens817f1np1 00:10:06.033 inet 192.168.100.9/24 scope global mlx_0_1 00:10:06.033 valid_lft forever preferred_lft forever 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
# get_available_rdma_ips 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.033 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:06.034 17:55:24 
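
get_ip_address, traced twice above, is a three-stage pipeline: ip -o -4 prints one line per address, awk takes the CIDR field, cut drops the prefix length. As a standalone helper, using the same commands as the trace:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
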
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:06.034 192.168.100.9' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:06.034 192.168.100.9' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:06.034 192.168.100.9' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3437981 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3437981 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3437981 ']' 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
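
nvmfexamplestart, entered above, launches the example target on four cores (-m 0xF) and then blocks in waitforlisten until the RPC socket answers. A sketch of that launch-and-wait pattern; framework_wait_init is a real SPDK RPC, while the polling loop itself is our condensation of waitforlisten:

./build/examples/nvmf -i 0 -g 10000 -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
    sleep 0.5
done
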
00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.034 17:55:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.606 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.606 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:06.606 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:06.606 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.606 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.606 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:06.606 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.606 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
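
The rpc_cmd calls above assemble the whole target in five steps: an RDMA transport, a 64 MiB/512 B malloc bdev, subsystem cnode1, the bdev attached as a namespace, and an RDMA listener on 192.168.100.8:4420. The same sequence as a plain script against a running target — these are the stock SPDK RPCs with the exact flags from the trace, just without the test harness:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                     # prints the new bdev name, Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
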
[[ 0 == 0 ]] 00:10:06.866 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:06.867 17:55:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:19.109 Initializing NVMe Controllers 00:10:19.109 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:19.109 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:19.109 Initialization complete. Launching workers. 00:10:19.109 ======================================================== 00:10:19.109 Latency(us) 00:10:19.109 Device Information : IOPS MiB/s Average min max 00:10:19.109 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26275.33 102.64 2435.03 706.56 18048.99 00:10:19.109 ======================================================== 00:10:19.109 Total : 26275.33 102.64 2435.03 706.56 18048.99 00:10:19.109 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:19.109 rmmod nvme_rdma 00:10:19.109 rmmod nvme_fabrics 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3437981 ']' 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3437981 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3437981 ']' 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3437981 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3437981 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:19.109 17:55:36 
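
The perf numbers above are internally consistent: at the 4 KiB I/O size (-o 4096), 26275.33 IOPS x 4096 B / 2^20 = 102.64 MiB/s, exactly the MiB/s column; and with -q 64 outstanding I/Os, Little's law gives an expected mean latency of 64 / 26275.33 s ≈ 2436 us, matching the reported 2435.03 us average.
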
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3437981' 00:10:19.109 killing process with pid 3437981 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3437981 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3437981 00:10:19.109 nvmf threads initialize successfully 00:10:19.109 bdev subsystem init successfully 00:10:19.109 created a nvmf target service 00:10:19.109 create targets's poll groups done 00:10:19.109 all subsystems of target started 00:10:19.109 nvmf target is running 00:10:19.109 all subsystems of target stopped 00:10:19.109 destroy targets's poll groups done 00:10:19.109 destroyed the nvmf target service 00:10:19.109 bdev subsystem finish successfully 00:10:19.109 nvmf threads destroy successfully 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.109 17:55:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.109 00:10:19.109 real 0m21.069s 00:10:19.109 user 0m52.676s 00:10:19.109 sys 0m6.570s 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.109 ************************************ 00:10:19.109 END TEST nvmf_example 00:10:19.109 ************************************ 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.109 ************************************ 00:10:19.109 START TEST nvmf_filesystem 00:10:19.109 ************************************ 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:19.109 * Looking for test storage... 
00:10:19.109 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.109 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.110 --rc genhtml_branch_coverage=1 00:10:19.110 --rc genhtml_function_coverage=1 00:10:19.110 --rc genhtml_legend=1 00:10:19.110 --rc geninfo_all_blocks=1 00:10:19.110 --rc geninfo_unexecuted_blocks=1 00:10:19.110 00:10:19.110 ' 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.110 --rc genhtml_branch_coverage=1 00:10:19.110 --rc genhtml_function_coverage=1 00:10:19.110 --rc genhtml_legend=1 00:10:19.110 --rc geninfo_all_blocks=1 00:10:19.110 --rc geninfo_unexecuted_blocks=1 00:10:19.110 00:10:19.110 ' 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.110 --rc genhtml_branch_coverage=1 00:10:19.110 --rc genhtml_function_coverage=1 00:10:19.110 --rc genhtml_legend=1 00:10:19.110 --rc geninfo_all_blocks=1 00:10:19.110 --rc geninfo_unexecuted_blocks=1 00:10:19.110 00:10:19.110 ' 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.110 --rc genhtml_branch_coverage=1 00:10:19.110 --rc genhtml_function_coverage=1 00:10:19.110 --rc genhtml_legend=1 00:10:19.110 --rc geninfo_all_blocks=1 00:10:19.110 --rc geninfo_unexecuted_blocks=1 00:10:19.110 00:10:19.110 ' 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:10:19.110 17:55:37 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:19.110 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:10:19.110  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]]
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:10:19.111 #define SPDK_CONFIG_H
00:10:19.111 #define SPDK_CONFIG_AIO_FSDEV 1
00:10:19.111 #define SPDK_CONFIG_APPS 1
00:10:19.111 #define SPDK_CONFIG_ARCH native
00:10:19.111 #undef SPDK_CONFIG_ASAN
00:10:19.111 #undef SPDK_CONFIG_AVAHI
00:10:19.111 #undef SPDK_CONFIG_CET
00:10:19.111 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:10:19.111 #define SPDK_CONFIG_COVERAGE 1
00:10:19.111 #define SPDK_CONFIG_CROSS_PREFIX 
00:10:19.111 #undef SPDK_CONFIG_CRYPTO
00:10:19.111 #undef SPDK_CONFIG_CRYPTO_MLX5
00:10:19.111 #undef SPDK_CONFIG_CUSTOMOCF
00:10:19.111 #undef SPDK_CONFIG_DAOS
00:10:19.111 #define SPDK_CONFIG_DAOS_DIR 
00:10:19.111 #define SPDK_CONFIG_DEBUG 1
00:10:19.111 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:10:19.111 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:10:19.111 #define SPDK_CONFIG_DPDK_INC_DIR 
00:10:19.111 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:10:19.111 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:10:19.111 #undef SPDK_CONFIG_DPDK_UADK
00:10:19.111 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:10:19.111 #define SPDK_CONFIG_EXAMPLES 1
00:10:19.111 #undef SPDK_CONFIG_FC
00:10:19.111 #define SPDK_CONFIG_FC_PATH 
00:10:19.111 #define SPDK_CONFIG_FIO_PLUGIN 1
00:10:19.111 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:10:19.111 #define SPDK_CONFIG_FSDEV 1
00:10:19.111 #undef SPDK_CONFIG_FUSE
00:10:19.111 #undef SPDK_CONFIG_FUZZER
00:10:19.111 #define SPDK_CONFIG_FUZZER_LIB 
00:10:19.111 #undef SPDK_CONFIG_GOLANG
00:10:19.111 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:10:19.111 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:10:19.111 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:10:19.111 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:10:19.111 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:10:19.111 #undef SPDK_CONFIG_HAVE_LIBBSD
00:10:19.111 #undef SPDK_CONFIG_HAVE_LZ4
00:10:19.111 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:10:19.111 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:10:19.111 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:10:19.111 #define SPDK_CONFIG_IDXD 1
00:10:19.111 #define SPDK_CONFIG_IDXD_KERNEL 1
00:10:19.111 #undef SPDK_CONFIG_IPSEC_MB
00:10:19.111 #define SPDK_CONFIG_IPSEC_MB_DIR 
00:10:19.111 #define SPDK_CONFIG_ISAL 1
00:10:19.111 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:10:19.111 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:10:19.111 #define SPDK_CONFIG_LIBDIR 
00:10:19.111 #undef SPDK_CONFIG_LTO
00:10:19.111 #define SPDK_CONFIG_MAX_LCORES 128
00:10:19.111 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:10:19.111 #define SPDK_CONFIG_NVME_CUSE 1
00:10:19.111 #undef SPDK_CONFIG_OCF
00:10:19.111 #define SPDK_CONFIG_OCF_PATH 
00:10:19.111 #define SPDK_CONFIG_OPENSSL_PATH 
00:10:19.111 #undef SPDK_CONFIG_PGO_CAPTURE
00:10:19.111 #define SPDK_CONFIG_PGO_DIR 
00:10:19.111 #undef SPDK_CONFIG_PGO_USE
00:10:19.111 #define SPDK_CONFIG_PREFIX /usr/local
00:10:19.111 #undef SPDK_CONFIG_RAID5F
00:10:19.111 #undef SPDK_CONFIG_RBD
00:10:19.111 #define SPDK_CONFIG_RDMA 1
00:10:19.111 #define SPDK_CONFIG_RDMA_PROV verbs
00:10:19.111 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:10:19.111 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:10:19.111 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:10:19.111 #define SPDK_CONFIG_SHARED 1
00:10:19.111 #undef SPDK_CONFIG_SMA
00:10:19.111 #define SPDK_CONFIG_TESTS 1
00:10:19.111 #undef SPDK_CONFIG_TSAN
00:10:19.111 #define SPDK_CONFIG_UBLK 1
00:10:19.111 #define SPDK_CONFIG_UBSAN 1
00:10:19.111 #undef SPDK_CONFIG_UNIT_TESTS
00:10:19.111 #undef SPDK_CONFIG_URING
00:10:19.111 #define SPDK_CONFIG_URING_PATH 
00:10:19.111 #undef SPDK_CONFIG_URING_ZNS
00:10:19.111 #undef SPDK_CONFIG_USDT
00:10:19.111 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:10:19.111 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:10:19.111 #undef SPDK_CONFIG_VFIO_USER
00:10:19.111 #define SPDK_CONFIG_VFIO_USER_DIR 
00:10:19.111 #define SPDK_CONFIG_VHOST 1
00:10:19.111 #define SPDK_CONFIG_VIRTIO 1
00:10:19.111 #undef SPDK_CONFIG_VTUNE
00:10:19.111 #define SPDK_CONFIG_VTUNE_DIR 
00:10:19.111 #define SPDK_CONFIG_WERROR 1
00:10:19.111 #define SPDK_CONFIG_WPDK_DIR 
00:10:19.111 #undef SPDK_CONFIG_XNVME
00:10:19.111 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
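
The glob match that just completed above tests the entire text of include/spdk/config.h against a pattern to decide whether this is a debug build. A minimal standalone sketch of the same technique (the path and echo are illustrative, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Slurp the whole header with the $(<file) builtin expansion and test it
    # with an unquoted glob inside [[ ]]; no grep process is forked.
    config_h=/path/to/config.h   # hypothetical location
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi
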
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:19.111  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
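
The PATH values printed above carry the same three toolchain directories many times over because paths/export.sh prepends them again each time it is sourced. If deduplication were wanted, a hypothetical order-preserving cleanup (not something this trace actually runs) could look like:

    # Collapse duplicate PATH entries while keeping first-seen order;
    # printf avoids the trailing newline a herestring would add.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}   # drop the trailing colon awk leaves behind
    export PATH
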
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]]
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:10:19.112  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
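
The long run of paired ": <value>" and "export SPDK_TEST_*" commands above is consistent with bash's default-assignment idiom, in which ":" is a no-op whose argument expansion assigns a value only when the variable is unset or empty, and the export then publishes the result. A minimal sketch of that idiom (the variable name here is illustrative):

    # ':' evaluates its arguments and discards them, so the := expansion
    # assigns 0 only when SPDK_TEST_EXAMPLE has no value yet.
    : "${SPDK_TEST_EXAMPLE:=0}"
    export SPDK_TEST_EXAMPLE

This is why flags set earlier in autorun-spdk.conf (for example SPDK_TEST_NVMF=1) survive, while everything left unset falls back to a default.
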
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:10:19.113  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV=
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt=
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind=
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind=
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE=
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@"
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3440594 ]]
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3440594
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.g5JGk0
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.g5JGk0/tests/target /tmp/spdk.g5JGk0
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096
00:10:19.114  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122320334848
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7036215296
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64663478272
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=14794752
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847685120
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23625728
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677482496
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=794624
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:10:19.115 * Looking for test storage...
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122320334848
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]]
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]]
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9250807808
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:19.115 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:19.115  17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
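
set_test_storage, traced above, indexes df -T output into associative arrays keyed by mount point and then compares the space available on the mount that backs the test directory against the requested size. A condensed sketch of that loop (array and column names follow the trace; df's block-size options are not visible in the trace, so the unit handling here is an assumption):

    #!/usr/bin/env bash
    declare -A mounts fss sizes avails uses
    testdir=${1:-/tmp}            # hypothetical test directory
    requested_size=2147483648     # 2 GiB, as in the trace

    # Skip the header and index each df column by the mount-point field.
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        uses["$mount"]=$use
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)

    # Resolve which mount backs $testdir, then check it has enough room.
    mount=$(df "$testdir" | awk '$1 !~ /Filesystem/ {print $6}')
    target_space=${avails[$mount]}
    if (( target_space >= requested_size )); then
        echo "using $testdir (available: $target_space)"
    fi

The fallback list seen in the trace (testdir, then a mktemp -udt directory) means the helper can retry the same check against /tmp when the primary location is too small.
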
0 : 0 - 1]' 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:19.115 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.116 --rc genhtml_branch_coverage=1 00:10:19.116 --rc genhtml_function_coverage=1 00:10:19.116 --rc genhtml_legend=1 00:10:19.116 --rc geninfo_all_blocks=1 00:10:19.116 --rc geninfo_unexecuted_blocks=1 00:10:19.116 00:10:19.116 ' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.116 --rc genhtml_branch_coverage=1 00:10:19.116 --rc genhtml_function_coverage=1 00:10:19.116 --rc genhtml_legend=1 00:10:19.116 --rc geninfo_all_blocks=1 00:10:19.116 --rc geninfo_unexecuted_blocks=1 00:10:19.116 00:10:19.116 ' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.116 --rc genhtml_branch_coverage=1 00:10:19.116 --rc genhtml_function_coverage=1 00:10:19.116 --rc genhtml_legend=1 00:10:19.116 --rc geninfo_all_blocks=1 00:10:19.116 --rc geninfo_unexecuted_blocks=1 00:10:19.116 00:10:19.116 ' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.116 --rc genhtml_branch_coverage=1 00:10:19.116 --rc genhtml_function_coverage=1 00:10:19.116 --rc genhtml_legend=1 00:10:19.116 --rc geninfo_all_blocks=1 00:10:19.116 --rc geninfo_unexecuted_blocks=1 00:10:19.116 00:10:19.116 ' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.116 17:55:37 
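
The "lt 1.15 2" walk above is scripts/common.sh deciding whether the installed lcov predates 2.x: both version strings are split on ".-:" and compared field by field, and the legacy "--rc lcov_*" option names are kept for the older tool. A minimal standalone sketch of that comparison (simplified: numeric fields only; the traced decimal helper's non-numeric handling is omitted):

    #!/usr/bin/env bash
    # Field-by-field dotted-version compare, as in the cmp_versions trace above.
    # Returns 0 (true) when $1 sorts strictly before $2.
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        # Iterate to the longer of the two; missing fields compare as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1 # equal is not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: keep legacy --rc lcov_* option names"
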
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.116 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.116 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.117 17:55:37 
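
The "[: : integer expression expected" complaint above is a real, if harmless, shell bug: nvmf/common.sh line 33 runs an arithmetic test ('[' '' -eq 1 ']') on a variable that is empty in this configuration, so test sees an empty string where it needs an integer. The usual guard is to default the expansion; a sketch (the actual variable name at line 33 is not visible in this xtrace, so "some_flag" below is a placeholder):

    # Hypothetical stand-in for the unset variable tested at nvmf/common.sh:33.
    some_flag=""

    # This form reproduces the logged error when the value is empty:
    #   [ "$some_flag" -eq 1 ] && ...
    # Defaulting the expansion keeps the test well-formed either way:
    if [ "${some_flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
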
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.117 17:55:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.433 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.434 17:55:45 
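
The e810/x722/mlx arrays being assembled in this stretch of the trace are indexed views into a pci_bus_cache map keyed by "vendor:device". The cache itself is built before this point and is not shown here; roughly, it can be reconstructed from sysfs like this (sketch only, under that assumption):

    #!/usr/bin/env bash
    # Bucket PCI functions by "vendor:device", the key shape used by
    # pci_bus_cache["$mellanox:0x1015"] and friends in the trace.
    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")    # e.g. 0x15b3 (Mellanox)
        device=$(<"$dev/device")    # e.g. 0x1015 (ConnectX-4 Lx, as found below)
        pci_bus_cache["$vendor:$device"]+="${pci_bus_cache["$vendor:$device"]:+ }${dev##*/}"
    done
    echo "${pci_bus_cache["0x15b3:0x1015"]:-no ConnectX-4 Lx functions found}"
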
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:27.434 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 
(0x15b3 - 0x1015)' 00:10:27.434 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:27.434 Found net devices under 0000:98:00.0: mlx_0_0 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:27.434 Found net devices under 0000:98:00.1: mlx_0_1 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 
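
Both "Found net devices under ..." lines come from the same two-step sysfs lookup traced above: glob the PCI function's net/ directory, then strip the path prefix. Standalone:

    #!/usr/bin/env bash
    # Resolve a PCI function to its kernel net device name(s) via sysfs,
    # exactly as the traced pci_net_devs glob does.
    pci=0000:98:00.0    # first port reported in this log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep the basename only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
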
00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:27.434 17:55:45 
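
rdma_device_init above has two halves: load the kernel RDMA stack, then have get_rdma_if_list keep only the net devices that the rxe helper also reports (the nested loops ending in "continue 2"). Condensed into one sketch (helper path abbreviated):

    #!/usr/bin/env bash
    # Kernel modules, in the order modprobed in the trace above.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

    # Intersect the discovered net devices with the rxe-capable list,
    # mirroring the for/for/continue-2 pattern in get_rdma_if_list.
    net_devs=(mlx_0_0 mlx_0_1)    # found earlier in this log
    mapfile -t rxe_net_devs < <(scripts/rxe_cfg_small.sh rxe-net)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2
            fi
        done
    done
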
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:27.434 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:27.434 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:27.434 altname enp152s0f0np0 00:10:27.434 altname ens817f0np0 00:10:27.434 inet 192.168.100.8/24 scope global mlx_0_0 00:10:27.434 valid_lft forever preferred_lft forever 00:10:27.434 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:27.435 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:27.435 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:27.435 altname enp152s0f1np1 00:10:27.435 altname ens817f1np1 00:10:27.435 inet 192.168.100.9/24 scope global mlx_0_1 00:10:27.435 valid_lft forever preferred_lft forever 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:27.435 192.168.100.9' 00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:27.435 192.168.100.9' 
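
get_ip_address in the trace is a three-stage pipe over "ip -o -4 addr show"; the two results are then stacked into RDMA_IP_LIST and split back apart with head/tail (the nvmf/common.sh@485/@486 steps just below). Compactly:

    #!/usr/bin/env bash
    # "ip -o -4" prints one line per address; field 4 is the CIDR (192.168.100.8/24).
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    # Newline-joined list, then first/second entries, as traced.
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "targets: $NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"
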
00:10:27.435 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:27.698 192.168.100.9' 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.698 17:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.698 ************************************ 00:10:27.698 START TEST nvmf_filesystem_no_in_capsule 00:10:27.698 ************************************ 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3445056 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3445056 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3445056 ']' 00:10:27.698 17:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.698 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.698 [2024-11-20 17:55:46.087086] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:10:27.698 [2024-11-20 17:55:46.087154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.698 [2024-11-20 17:55:46.171280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.698 [2024-11-20 17:55:46.212571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.698 [2024-11-20 17:55:46.212611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.698 [2024-11-20 17:55:46.212620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.698 [2024-11-20 17:55:46.212627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.698 [2024-11-20 17:55:46.212633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
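
nvmfappstart above reduces to: launch nvmf_tgt in the background with the traced flags, record its pid, and block until the RPC socket is serviceable. A simplified sketch (the real waitforlisten also probes the RPC server itself; here the socket's existence stands in for "listening"):

    #!/usr/bin/env bash
    # Flags as traced: shm id 0, all tracepoint groups enabled, 4-core mask.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/spdk.sock ]] && break
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
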
00:10:27.698 [2024-11-20 17:55:46.214266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.698 [2024-11-20 17:55:46.214385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.698 [2024-11-20 17:55:46.214543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.698 [2024-11-20 17:55:46.214543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.639 17:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.639 [2024-11-20 17:55:46.938143] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:28.639 [2024-11-20 17:55:46.970047] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e23f30/0x1e28420) succeed. 00:10:28.639 [2024-11-20 17:55:46.984544] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e255c0/0x1e69ac0) succeed. 
00:10:28.639 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.639 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:28.639 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.639 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.900 Malloc1 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.900 [2024-11-20 17:55:47.228917] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:28.900 17:55:47 
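
rpc_cmd in the trace is a thin wrapper around SPDK's RPC client, so the whole target-side setup above comes down to five calls against /var/tmp/spdk.sock. Spelled out with scripts/rpc.py, arguments copied verbatim from the trace:

    #!/usr/bin/env bash
    rpc=scripts/rpc.py    # SPDK RPC client, default socket /var/tmp/spdk.sock

    # RDMA transport; -c 0 requests no in-capsule data, which the target
    # clamps up to the 256-byte minimum (the WARNING logged above).
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

    # 512 MB RAM-backed bdev with 512-byte blocks (the 1048576 blocks shown
    # in the bdev_get_bdevs JSON below).
    $rpc bdev_malloc_create 512 512 -b Malloc1

    # Subsystem, namespace, and the RDMA listener the host connects to.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
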
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:28.900 { 00:10:28.900 "name": "Malloc1", 00:10:28.900 "aliases": [ 00:10:28.900 "735e751d-b5fc-46d0-bf53-a59c9cfa92f0" 00:10:28.900 ], 00:10:28.900 "product_name": "Malloc disk", 00:10:28.900 "block_size": 512, 00:10:28.900 "num_blocks": 1048576, 00:10:28.900 "uuid": "735e751d-b5fc-46d0-bf53-a59c9cfa92f0", 00:10:28.900 "assigned_rate_limits": { 00:10:28.900 "rw_ios_per_sec": 0, 00:10:28.900 "rw_mbytes_per_sec": 0, 00:10:28.900 "r_mbytes_per_sec": 0, 00:10:28.900 "w_mbytes_per_sec": 0 00:10:28.900 }, 00:10:28.900 "claimed": true, 00:10:28.900 "claim_type": "exclusive_write", 00:10:28.900 "zoned": false, 00:10:28.900 "supported_io_types": { 00:10:28.900 "read": true, 00:10:28.900 "write": true, 00:10:28.900 "unmap": true, 00:10:28.900 "flush": true, 00:10:28.900 "reset": true, 00:10:28.900 "nvme_admin": false, 00:10:28.900 "nvme_io": false, 00:10:28.900 "nvme_io_md": false, 00:10:28.900 "write_zeroes": true, 00:10:28.900 "zcopy": true, 00:10:28.900 "get_zone_info": false, 00:10:28.900 "zone_management": false, 00:10:28.900 "zone_append": false, 00:10:28.900 "compare": false, 00:10:28.900 "compare_and_write": false, 00:10:28.900 "abort": true, 00:10:28.900 "seek_hole": false, 00:10:28.900 "seek_data": false, 00:10:28.900 "copy": true, 00:10:28.900 "nvme_iov_md": false 00:10:28.900 }, 00:10:28.900 "memory_domains": [ 00:10:28.900 { 00:10:28.900 "dma_device_id": "system", 00:10:28.900 "dma_device_type": 1 00:10:28.900 }, 00:10:28.900 { 00:10:28.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.900 "dma_device_type": 2 00:10:28.900 } 00:10:28.900 ], 00:10:28.900 "driver_specific": {} 00:10:28.900 } 00:10:28.900 ]' 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:10:28.900 17:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:30.824 17:55:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.824 17:55:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:30.824 17:55:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.824 17:55:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:30.824 17:55:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:32.744 17:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:32.744 17:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.684 ************************************ 00:10:33.684 START TEST filesystem_ext4 00:10:33.684 ************************************ 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:33.684 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:33.684 mke2fs 1.47.0 (5-Feb-2023) 00:10:33.944 Discarding device blocks: 0/522240 done 00:10:33.944 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:33.944 Filesystem UUID: 9cea55d8-7c24-45d4-aca7-7e206cbc0951 00:10:33.944 Superblock backups stored on 
blocks: 00:10:33.944 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:33.944 00:10:33.944 Allocating group tables: 0/64 done 00:10:33.944 Writing inode tables: 0/64 done 00:10:33.944 Creating journal (8192 blocks): done 00:10:33.944 Writing superblocks and filesystem accounting information: 0/64 done 00:10:33.944 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3445056 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:33.944 00:10:33.944 real 0m0.190s 00:10:33.944 user 0m0.027s 00:10:33.944 sys 0m0.075s 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:33.944 ************************************ 00:10:33.944 END TEST filesystem_ext4 00:10:33.944 ************************************ 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:10:33.944 ************************************ 00:10:33.944 START TEST filesystem_btrfs 00:10:33.944 ************************************ 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:33.944 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:34.204 btrfs-progs v6.8.1 00:10:34.204 See https://btrfs.readthedocs.io for more information. 00:10:34.204 00:10:34.204 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:34.204 NOTE: several default settings have changed in version 5.15, please make sure 00:10:34.204 this does not affect your deployments: 00:10:34.204 - DUP for metadata (-m dup) 00:10:34.204 - enabled no-holes (-O no-holes) 00:10:34.204 - enabled free-space-tree (-R free-space-tree) 00:10:34.204 00:10:34.204 Label: (null) 00:10:34.204 UUID: d1f0029a-57a6-4d72-a1fb-013419ba77e4 00:10:34.204 Node size: 16384 00:10:34.204 Sector size: 4096 (CPU page size: 4096) 00:10:34.204 Filesystem size: 510.00MiB 00:10:34.204 Block group profiles: 00:10:34.204 Data: single 8.00MiB 00:10:34.204 Metadata: DUP 32.00MiB 00:10:34.204 System: DUP 8.00MiB 00:10:34.204 SSD detected: yes 00:10:34.204 Zoned device: no 00:10:34.204 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:34.204 Checksum: crc32c 00:10:34.204 Number of devices: 1 00:10:34.204 Devices: 00:10:34.204 ID SIZE PATH 00:10:34.204 1 510.00MiB /dev/nvme0n1p1 00:10:34.204 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3445056 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.204 00:10:34.204 real 0m0.208s 00:10:34.204 user 0m0.023s 00:10:34.204 sys 0m0.123s 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.204 ************************************ 00:10:34.204 END TEST filesystem_btrfs 
00:10:34.204 ************************************ 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.204 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.465 ************************************ 00:10:34.465 START TEST filesystem_xfs 00:10:34.465 ************************************ 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:34.465 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:34.465 = sectsz=512 attr=2, projid32bit=1 00:10:34.465 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:34.465 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:34.465 data = bsize=4096 blocks=130560, imaxpct=25 00:10:34.465 = sunit=0 swidth=0 blks 00:10:34.465 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:34.465 log =internal log bsize=4096 blocks=16384, version=2 00:10:34.465 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:34.465 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:34.465 Discarding blocks...Done. 
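The xtrace lines above come from common/autotest_common.sh's make_filesystem helper, which picks a force flag per filesystem type before invoking mkfs: ext4 takes -F, while btrfs and xfs take -f. A minimal sketch of that helper, reconstructed from the visible trace (the retry loop suggested by 'local i=0' never fires in this run, so its bound and sleep are assumptions):

    # Sketch of make_filesystem as reconstructed from the xtrace above.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 forces with -F
        else
            force=-f        # mkfs.btrfs / mkfs.xfs force with -f
        fi
        # Assumed retry in case the partition node is not ready yet;
        # the loop bound is not visible in this excerpt.
        until "mkfs.$fstype" $force "$dev_name"; do
            ((++i > 3)) && return 1
            sleep 1
        done
        return 0
    }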
00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3445056 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.465 00:10:34.465 real 0m0.180s 00:10:34.465 user 0m0.024s 00:10:34.465 sys 0m0.077s 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.465 ************************************ 00:10:34.465 END TEST filesystem_xfs 00:10:34.465 ************************************ 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:34.465 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:34.725 17:55:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:36.113 17:55:54 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3445056 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3445056 ']' 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3445056 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3445056 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3445056' 00:10:36.113 killing process with pid 3445056 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3445056 00:10:36.113 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3445056 00:10:36.375 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:36.375 00:10:36.375 real 0m8.656s 00:10:36.375 user 0m34.006s 00:10:36.375 sys 0m1.106s 00:10:36.375 17:55:54 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.375 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.375 ************************************ 00:10:36.375 END TEST nvmf_filesystem_no_in_capsule 00:10:36.375 ************************************ 00:10:36.375 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:36.375 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.375 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.376 ************************************ 00:10:36.376 START TEST nvmf_filesystem_in_capsule 00:10:36.376 ************************************ 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3446994 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3446994 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3446994 ']' 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
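This second pass reruns the same filesystem checks with in-capsule data enabled: run_test handed 4096 to nvmf_filesystem_part, and that value surfaces below as the -c argument when the RDMA transport is created. The target-side setup the following trace performs, condensed into the underlying RPCs (rpc_cmd in the trace is a wrapper over SPDK's scripts/rpc.py; names and addresses are taken verbatim from this log):

    # In-capsule target bring-up, per the trace that follows.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                  # allow any host, set serial
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420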
00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.376 17:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.376 [2024-11-20 17:55:54.821521] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:10:36.376 [2024-11-20 17:55:54.821571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.376 [2024-11-20 17:55:54.903658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.637 [2024-11-20 17:55:54.943662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.637 [2024-11-20 17:55:54.943697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.637 [2024-11-20 17:55:54.943706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.637 [2024-11-20 17:55:54.943712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.637 [2024-11-20 17:55:54.943718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.637 [2024-11-20 17:55:54.945565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.637 [2024-11-20 17:55:54.945682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.637 [2024-11-20 17:55:54.945839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.637 [2024-11-20 17:55:54.945840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.208 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.208 [2024-11-20 17:55:55.700844] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d3ef30/0x1d43420) 
succeed. 00:10:37.208 [2024-11-20 17:55:55.715548] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d405c0/0x1d84ac0) succeed. 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.470 Malloc1 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.470 [2024-11-20 17:55:55.960960] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:37.470 { 00:10:37.470 "name": "Malloc1", 00:10:37.470 "aliases": [ 00:10:37.470 "1c04baa8-2263-40d3-8118-800ac4e3db23" 00:10:37.470 ], 00:10:37.470 "product_name": "Malloc disk", 00:10:37.470 "block_size": 512, 00:10:37.470 "num_blocks": 1048576, 00:10:37.470 "uuid": "1c04baa8-2263-40d3-8118-800ac4e3db23", 00:10:37.470 "assigned_rate_limits": { 00:10:37.470 "rw_ios_per_sec": 0, 00:10:37.470 "rw_mbytes_per_sec": 0, 00:10:37.470 "r_mbytes_per_sec": 0, 00:10:37.470 "w_mbytes_per_sec": 0 00:10:37.470 }, 00:10:37.470 "claimed": true, 00:10:37.470 "claim_type": "exclusive_write", 00:10:37.470 "zoned": false, 00:10:37.470 "supported_io_types": { 00:10:37.470 "read": true, 00:10:37.470 "write": true, 00:10:37.470 "unmap": true, 00:10:37.470 "flush": true, 00:10:37.470 "reset": true, 00:10:37.470 "nvme_admin": false, 00:10:37.470 "nvme_io": false, 00:10:37.470 "nvme_io_md": false, 00:10:37.470 "write_zeroes": true, 00:10:37.470 "zcopy": true, 00:10:37.470 "get_zone_info": false, 00:10:37.470 "zone_management": false, 00:10:37.470 "zone_append": false, 00:10:37.470 "compare": false, 00:10:37.470 "compare_and_write": false, 00:10:37.470 "abort": true, 00:10:37.470 "seek_hole": false, 00:10:37.470 "seek_data": false, 00:10:37.470 "copy": true, 00:10:37.470 "nvme_iov_md": false 00:10:37.470 }, 00:10:37.470 "memory_domains": [ 00:10:37.470 { 00:10:37.470 "dma_device_id": "system", 00:10:37.470 "dma_device_type": 1 00:10:37.470 }, 00:10:37.470 { 00:10:37.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.470 "dma_device_type": 2 00:10:37.470 } 00:10:37.470 ], 00:10:37.470 "driver_specific": {} 00:10:37.470 } 00:10:37.470 ]' 00:10:37.470 17:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:37.731 17:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:37.731 17:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:37.731 17:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:37.732 17:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:37.732 17:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:37.732 17:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
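get_bdev_size above recovers the bdev's capacity from the JSON dump: block_size 512 times num_blocks 1048576 gives 536870912 bytes (512 MiB), which the script stores as malloc_size and later compares against the size reported by the connected NVMe device. The same extraction as a standalone sketch, using the jq filters shown in the trace:

    # Mirror of get_bdev_size from common/autotest_common.sh.
    bdev_info=$(rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576
    echo $((bs * nb))                              # 536870912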
00:10:37.732 17:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:39.116 17:55:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.116 17:55:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:39.116 17:55:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.116 17:55:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:39.116 17:55:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:41.028 17:55:59 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:41.028 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:41.288 17:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.231 ************************************ 00:10:42.231 START TEST filesystem_in_capsule_ext4 00:10:42.231 ************************************ 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:42.231 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:42.231 mke2fs 1.47.0 (5-Feb-2023) 00:10:42.492 Discarding device blocks: 0/522240 done 00:10:42.492 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:42.492 Filesystem UUID: c3d1ee41-d0a6-4f6f-9b8e-227d86496bfc 00:10:42.492 
Superblock backups stored on blocks: 00:10:42.492 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:42.492 00:10:42.492 Allocating group tables: 0/64 done 00:10:42.492 Writing inode tables: 0/64 done 00:10:42.492 Creating journal (8192 blocks): done 00:10:42.492 Writing superblocks and filesystem accounting information: 0/64 done 00:10:42.492 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3446994 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.492 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.493 00:10:42.493 real 0m0.173s 00:10:42.493 user 0m0.019s 00:10:42.493 sys 0m0.077s 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:42.493 ************************************ 00:10:42.493 END TEST filesystem_in_capsule_ext4 00:10:42.493 ************************************ 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.493 17:56:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.493 ************************************ 00:10:42.493 START TEST filesystem_in_capsule_btrfs 00:10:42.493 ************************************ 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:42.493 17:56:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:42.754 btrfs-progs v6.8.1 00:10:42.754 See https://btrfs.readthedocs.io for more information. 00:10:42.754 00:10:42.754 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:42.754 NOTE: several default settings have changed in version 5.15, please make sure 00:10:42.754 this does not affect your deployments: 00:10:42.754 - DUP for metadata (-m dup) 00:10:42.754 - enabled no-holes (-O no-holes) 00:10:42.754 - enabled free-space-tree (-R free-space-tree) 00:10:42.754 00:10:42.754 Label: (null) 00:10:42.754 UUID: af33ba1b-e932-4375-b302-86868a68b27d 00:10:42.754 Node size: 16384 00:10:42.754 Sector size: 4096 (CPU page size: 4096) 00:10:42.754 Filesystem size: 510.00MiB 00:10:42.754 Block group profiles: 00:10:42.754 Data: single 8.00MiB 00:10:42.754 Metadata: DUP 32.00MiB 00:10:42.754 System: DUP 8.00MiB 00:10:42.754 SSD detected: yes 00:10:42.754 Zoned device: no 00:10:42.754 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:42.754 Checksum: crc32c 00:10:42.754 Number of devices: 1 00:10:42.754 Devices: 00:10:42.754 ID SIZE PATH 00:10:42.754 1 510.00MiB /dev/nvme0n1p1 00:10:42.754 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3446994 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.754 00:10:42.754 real 0m0.210s 00:10:42.754 user 0m0.031s 00:10:42.754 sys 0m0.119s 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.754 ************************************ 00:10:42.754 END TEST filesystem_in_capsule_btrfs 00:10:42.754 ************************************ 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.754 ************************************ 00:10:42.754 START TEST filesystem_in_capsule_xfs 00:10:42.754 ************************************ 00:10:42.754 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:43.015 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:43.015 = sectsz=512 attr=2, projid32bit=1 00:10:43.015 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:43.015 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:43.015 data = bsize=4096 blocks=130560, imaxpct=25 00:10:43.015 = sunit=0 swidth=0 blks 00:10:43.015 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:43.015 log =internal log bsize=4096 blocks=16384, version=2 00:10:43.015 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:43.015 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.015 Discarding blocks...Done. 
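The steps that follow are the body of nvmf_filesystem_create in target/filesystem.sh, already seen for ext4 and btrfs above: mount the fresh filesystem, exercise a file, unmount, confirm the target process is still alive, and check that both the namespace and its partition remain visible. Condensed, with the pid and device names from this run:

    # Per-filesystem check, repeated for ext4, btrfs and xfs.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 3446994                          # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still exposed
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still exposed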
00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3446994 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.015 00:10:43.015 real 0m0.173s 00:10:43.015 user 0m0.025s 00:10:43.015 sys 0m0.075s 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:43.015 ************************************ 00:10:43.015 END TEST filesystem_in_capsule_xfs 00:10:43.015 ************************************ 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:43.015 17:56:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.401 17:56:02 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3446994 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3446994 ']' 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3446994 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.401 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446994 00:10:44.662 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.662 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.662 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446994' 00:10:44.662 killing process with pid 3446994 00:10:44.662 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3446994 00:10:44.662 17:56:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3446994 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:44.923 00:10:44.923 real 0m8.514s 
00:10:44.923 user 0m33.392s 00:10:44.923 sys 0m1.145s 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.923 ************************************ 00:10:44.923 END TEST nvmf_filesystem_in_capsule 00:10:44.923 ************************************ 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:44.923 rmmod nvme_rdma 00:10:44.923 rmmod nvme_fabrics 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:44.923 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:44.924 00:10:44.924 real 0m26.254s 00:10:44.924 user 1m10.169s 00:10:44.924 sys 0m8.689s 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.924 ************************************ 00:10:44.924 END TEST nvmf_filesystem 00:10:44.924 ************************************ 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.924 ************************************ 00:10:44.924 START TEST nvmf_target_discovery 00:10:44.924 ************************************ 00:10:44.924 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:45.185 * Looking for test storage... 
00:10:45.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:45.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.186 --rc genhtml_branch_coverage=1 00:10:45.186 --rc genhtml_function_coverage=1 00:10:45.186 --rc genhtml_legend=1 00:10:45.186 --rc geninfo_all_blocks=1 00:10:45.186 --rc geninfo_unexecuted_blocks=1 00:10:45.186 00:10:45.186 ' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:45.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.186 --rc genhtml_branch_coverage=1 00:10:45.186 --rc genhtml_function_coverage=1 00:10:45.186 --rc genhtml_legend=1 00:10:45.186 --rc geninfo_all_blocks=1 00:10:45.186 --rc geninfo_unexecuted_blocks=1 00:10:45.186 00:10:45.186 ' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:45.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.186 --rc genhtml_branch_coverage=1 00:10:45.186 --rc genhtml_function_coverage=1 00:10:45.186 --rc genhtml_legend=1 00:10:45.186 --rc geninfo_all_blocks=1 00:10:45.186 --rc geninfo_unexecuted_blocks=1 00:10:45.186 00:10:45.186 ' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:45.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.186 --rc genhtml_branch_coverage=1 00:10:45.186 --rc genhtml_function_coverage=1 00:10:45.186 --rc genhtml_legend=1 00:10:45.186 --rc geninfo_all_blocks=1 00:10:45.186 --rc geninfo_unexecuted_blocks=1 00:10:45.186 00:10:45.186 ' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.186 17:56:03 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.186 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.186 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.187 17:56:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.328 17:56:11 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:53.328 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
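A note on the "[: : integer expression expected" complaint a few entries up: it comes from nvmf/common.sh line 33, where the trace shows test expanding to '[' '' -eq 1 ']', an empty variable handed to the numeric -eq operator. A minimal sketch of the failure and two tolerant rewrites (the variable name here is hypothetical, the real one is not visible in the trace):

flag=""                             # empty, as in the trace above
[ "$flag" -eq 1 ] && echo on        # [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo on   # default to 0 before the numeric test
[[ "$flag" == 1 ]] && echo on       # or compare as a string instead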
00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:53.329 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:53.329 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:53.329 Found net devices under 0000:98:00.0: mlx_0_0 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.329 17:56:11 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:53.329 Found net devices under 0000:98:00.1: mlx_0_1 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
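The rdma_device_init sequence just traced reduces to loading the kernel RDMA stack before IPs are handed out. A condensed sketch of the module-load step (the module list is copied from the trace; the failure handling and the sysfs check are illustrative additions, not part of load_ib_rdma_modules):

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || { echo "modprobe $mod failed" >&2; exit 1; }
done
ls /sys/class/infiniband   # both ConnectX ports (mlx5_0, mlx5_1) should now be listed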
00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:53.329 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.329 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:53.329 altname enp152s0f0np0 00:10:53.329 altname ens817f0np0 00:10:53.329 inet 192.168.100.8/24 scope global mlx_0_0 00:10:53.329 valid_lft forever preferred_lft forever 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.329 17:56:11 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:53.329 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.329 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:53.329 altname enp152s0f1np1 00:10:53.329 altname ens817f1np1 00:10:53.329 inet 192.168.100.9/24 scope global mlx_0_1 00:10:53.329 valid_lft forever preferred_lft forever 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:53.329 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
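The get_ip_address helper traced above for both ports is a three-stage pipeline: ip -o -4 addr show prints one line per address, awk takes field 4 (the CIDR address), and cut drops the prefix length. As a standalone sketch:

get_ip_address() {
    local interface=$1
    # field 4 of the one-line output is e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this host
get_ip_address mlx_0_1   # prints 192.168.100.9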
00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:53.330 192.168.100.9' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:53.330 192.168.100.9' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:53.330 192.168.100.9' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.330 17:56:11 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3453009 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3453009 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3453009 ']' 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.330 17:56:11 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.330 [2024-11-20 17:56:11.726362] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:10:53.330 [2024-11-20 17:56:11.726436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.330 [2024-11-20 17:56:11.810006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.330 [2024-11-20 17:56:11.851406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.330 [2024-11-20 17:56:11.851438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.330 [2024-11-20 17:56:11.851446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.330 [2024-11-20 17:56:11.851453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.330 [2024-11-20 17:56:11.851459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
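nvmfappstart has launched nvmf_tgt (pid 3453009 in this run) and waitforlisten now blocks until the app answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that wait loop using the standard scripts/rpc.py client (the retry count and sleep interval are illustrative, not the values from autotest_common.sh):

pid=3453009
sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    # rpc_get_methods only succeeds once the target is listening on the socket
    ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done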
00:10:53.330 [2024-11-20 17:56:11.852983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.330 [2024-11-20 17:56:11.853100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.330 [2024-11-20 17:56:11.853256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.330 [2024-11-20 17:56:11.853257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.276 [2024-11-20 17:56:12.601803] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22fff30/0x2304420) succeed. 00:10:54.276 [2024-11-20 17:56:12.616541] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23015c0/0x2345ac0) succeed. 
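With the RDMA transport created above, the loop traced next builds the discovery-test topology: four null bdevs, one subsystem each, an RDMA listener per subsystem on 192.168.100.8:4420, plus a discovery listener and a referral on port 4430. Expressed directly against scripts/rpc.py instead of the rpc_cmd wrapper, it is roughly (a sketch; all arguments are copied from the trace):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in 1 2 3 4; do
    $rpc bdev_null_create Null$i 102400 512   # 102400 MB null bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430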
00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.276 Null1 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.276 [2024-11-20 17:56:12.803297] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.276 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 Null2 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:54.580 17:56:12 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 Null3 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.580 17:56:12 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.580 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 Null4 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.581 17:56:12 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 4420
00:10:54.581
00:10:54.581 Discovery Log Number of Records 6, Generation counter 6
00:10:54.581 =====Discovery Log Entry 0======
00:10:54.581 trtype: rdma
00:10:54.581 adrfam: ipv4
00:10:54.581 subtype: current discovery subsystem
00:10:54.581 treq: not required
00:10:54.581 portid: 0
00:10:54.581 trsvcid: 4420
00:10:54.581 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:10:54.581 traddr: 192.168.100.8
00:10:54.581 eflags: explicit discovery connections, duplicate discovery information
00:10:54.581 rdma_prtype: not specified
00:10:54.581 rdma_qptype: connected
00:10:54.581 rdma_cms: rdma-cm
00:10:54.581 rdma_pkey: 0x0000
00:10:54.581 =====Discovery Log Entry 1======
00:10:54.581 trtype: rdma
00:10:54.581 adrfam: ipv4
00:10:54.581 subtype: nvme subsystem
00:10:54.581 treq: not required
00:10:54.581 portid: 0
00:10:54.581 trsvcid: 4420
00:10:54.581 subnqn: nqn.2016-06.io.spdk:cnode1
00:10:54.581 traddr: 192.168.100.8
00:10:54.581 eflags: none
00:10:54.581 rdma_prtype: not specified
00:10:54.581 rdma_qptype: connected
00:10:54.581 rdma_cms: rdma-cm
00:10:54.581 rdma_pkey: 0x0000
00:10:54.581 =====Discovery Log Entry 2======
00:10:54.581 trtype: rdma
00:10:54.581 adrfam: ipv4
00:10:54.581 subtype: nvme subsystem
00:10:54.581 treq: not required
00:10:54.581 portid: 0
00:10:54.581 trsvcid: 4420
00:10:54.581 subnqn: nqn.2016-06.io.spdk:cnode2
00:10:54.581 traddr: 192.168.100.8
00:10:54.581 eflags: none
00:10:54.581 rdma_prtype: not specified
00:10:54.581 rdma_qptype: connected
00:10:54.581 rdma_cms: rdma-cm
00:10:54.581 rdma_pkey: 0x0000
00:10:54.581 =====Discovery Log Entry 3======
00:10:54.581 trtype: rdma
00:10:54.581 adrfam: ipv4
00:10:54.581 subtype: nvme subsystem
00:10:54.581 treq: not required
00:10:54.581 portid: 0
00:10:54.581 trsvcid: 4420
00:10:54.581 subnqn: nqn.2016-06.io.spdk:cnode3
00:10:54.581 traddr: 192.168.100.8
00:10:54.581 eflags: none
00:10:54.581 rdma_prtype: not specified
00:10:54.581 rdma_qptype: connected
00:10:54.581 rdma_cms: rdma-cm
00:10:54.581 rdma_pkey: 0x0000
00:10:54.581 =====Discovery Log Entry 4======
00:10:54.581 trtype: rdma
00:10:54.581 adrfam: ipv4
00:10:54.581 subtype: nvme subsystem
00:10:54.581 treq: not required
00:10:54.581 portid: 0
00:10:54.581 trsvcid: 4420
00:10:54.581 subnqn: nqn.2016-06.io.spdk:cnode4
00:10:54.581 traddr: 192.168.100.8
00:10:54.581 eflags: none
00:10:54.581 rdma_prtype: not specified
00:10:54.581 rdma_qptype: connected
00:10:54.581 rdma_cms: rdma-cm
00:10:54.581 rdma_pkey: 0x0000
00:10:54.581 =====Discovery Log Entry 5======
00:10:54.581 trtype: rdma
00:10:54.581 adrfam: ipv4
00:10:54.581 subtype: discovery subsystem referral
00:10:54.581 treq: not required
00:10:54.581 portid: 0
00:10:54.581 trsvcid: 4430
00:10:54.581 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:10:54.581 traddr: 192.168.100.8
00:10:54.581 eflags: none
00:10:54.581 rdma_prtype: unrecognized
00:10:54.581 rdma_qptype: unrecognized
00:10:54.581 rdma_cms: unrecognized
00:10:54.581 rdma_pkey: 0x0000
00:10:54.581 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:10:54.581 Perform nvmf subsystem discovery via RPC
00:10:54.581 17:56:13
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:54.581 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.581 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.581 [ 00:10:54.581 { 00:10:54.581 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:54.581 "subtype": "Discovery", 00:10:54.581 "listen_addresses": [ 00:10:54.581 { 00:10:54.581 "trtype": "RDMA", 00:10:54.581 "adrfam": "IPv4", 00:10:54.581 "traddr": "192.168.100.8", 00:10:54.581 "trsvcid": "4420" 00:10:54.581 } 00:10:54.581 ], 00:10:54.581 "allow_any_host": true, 00:10:54.581 "hosts": [] 00:10:54.581 }, 00:10:54.581 { 00:10:54.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.581 "subtype": "NVMe", 00:10:54.581 "listen_addresses": [ 00:10:54.581 { 00:10:54.581 "trtype": "RDMA", 00:10:54.581 "adrfam": "IPv4", 00:10:54.581 "traddr": "192.168.100.8", 00:10:54.581 "trsvcid": "4420" 00:10:54.581 } 00:10:54.581 ], 00:10:54.581 "allow_any_host": true, 00:10:54.581 "hosts": [], 00:10:54.581 "serial_number": "SPDK00000000000001", 00:10:54.582 "model_number": "SPDK bdev Controller", 00:10:54.582 "max_namespaces": 32, 00:10:54.582 "min_cntlid": 1, 00:10:54.582 "max_cntlid": 65519, 00:10:54.582 "namespaces": [ 00:10:54.582 { 00:10:54.582 "nsid": 1, 00:10:54.582 "bdev_name": "Null1", 00:10:54.582 "name": "Null1", 00:10:54.582 "nguid": "A85B0111553342648B94AB5C7D43FCC1", 00:10:54.582 "uuid": "a85b0111-5533-4264-8b94-ab5c7d43fcc1" 00:10:54.582 } 00:10:54.582 ] 00:10:54.582 }, 00:10:54.582 { 00:10:54.582 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:54.582 "subtype": "NVMe", 00:10:54.582 "listen_addresses": [ 00:10:54.582 { 00:10:54.582 "trtype": "RDMA", 00:10:54.582 "adrfam": "IPv4", 00:10:54.582 "traddr": "192.168.100.8", 00:10:54.582 "trsvcid": "4420" 00:10:54.582 } 00:10:54.582 ], 00:10:54.582 "allow_any_host": true, 00:10:54.582 "hosts": [], 00:10:54.582 "serial_number": "SPDK00000000000002", 00:10:54.582 "model_number": "SPDK bdev Controller", 00:10:54.582 "max_namespaces": 32, 00:10:54.582 "min_cntlid": 1, 00:10:54.582 "max_cntlid": 65519, 00:10:54.582 "namespaces": [ 00:10:54.582 { 00:10:54.582 "nsid": 1, 00:10:54.582 "bdev_name": "Null2", 00:10:54.582 "name": "Null2", 00:10:54.582 "nguid": "BA5B6162FB214D20AB33D5C8A8E464B4", 00:10:54.582 "uuid": "ba5b6162-fb21-4d20-ab33-d5c8a8e464b4" 00:10:54.582 } 00:10:54.582 ] 00:10:54.582 }, 00:10:54.582 { 00:10:54.582 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:54.582 "subtype": "NVMe", 00:10:54.582 "listen_addresses": [ 00:10:54.582 { 00:10:54.582 "trtype": "RDMA", 00:10:54.582 "adrfam": "IPv4", 00:10:54.582 "traddr": "192.168.100.8", 00:10:54.582 "trsvcid": "4420" 00:10:54.582 } 00:10:54.582 ], 00:10:54.582 "allow_any_host": true, 00:10:54.582 "hosts": [], 00:10:54.582 "serial_number": "SPDK00000000000003", 00:10:54.582 "model_number": "SPDK bdev Controller", 00:10:54.582 "max_namespaces": 32, 00:10:54.582 "min_cntlid": 1, 00:10:54.582 "max_cntlid": 65519, 00:10:54.582 "namespaces": [ 00:10:54.582 { 00:10:54.582 "nsid": 1, 00:10:54.582 "bdev_name": "Null3", 00:10:54.582 "name": "Null3", 00:10:54.582 "nguid": "5C28FF5628814A2990A0B17823748C44", 00:10:54.582 "uuid": "5c28ff56-2881-4a29-90a0-b17823748c44" 00:10:54.582 } 00:10:54.582 ] 00:10:54.582 }, 00:10:54.582 { 00:10:54.582 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:54.582 "subtype": "NVMe", 00:10:54.582 "listen_addresses": [ 00:10:54.582 { 00:10:54.582 
"trtype": "RDMA", 00:10:54.582 "adrfam": "IPv4", 00:10:54.582 "traddr": "192.168.100.8", 00:10:54.582 "trsvcid": "4420" 00:10:54.582 } 00:10:54.582 ], 00:10:54.582 "allow_any_host": true, 00:10:54.582 "hosts": [], 00:10:54.582 "serial_number": "SPDK00000000000004", 00:10:54.582 "model_number": "SPDK bdev Controller", 00:10:54.582 "max_namespaces": 32, 00:10:54.582 "min_cntlid": 1, 00:10:54.582 "max_cntlid": 65519, 00:10:54.582 "namespaces": [ 00:10:54.582 { 00:10:54.582 "nsid": 1, 00:10:54.582 "bdev_name": "Null4", 00:10:54.582 "name": "Null4", 00:10:54.582 "nguid": "AF59866A168A43758E50AE60B820B545", 00:10:54.844 "uuid": "af59866a-168a-4375-8e50-ae60b820b545" 00:10:54.845 } 00:10:54.845 ] 00:10:54.845 } 00:10:54.845 ] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:54.845 
17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:54.845 17:56:13 
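
Teardown mirrors setup: the loop traced above deletes each subsystem before its backing null bdev, drops the port-4430 referral, and finally checks that bdev_get_bdevs reports an empty name list. Condensed, under the same $rpc assumption as above:

    # Condensed teardown traced above.
    for i in 1 2 3 4; do
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        $rpc bdev_null_delete Null$i
    done
    $rpc nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
    [[ -z $($rpc bdev_get_bdevs | jq -r '.[].name') ]]   # nothing left behind
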
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:54.845 rmmod nvme_rdma 00:10:54.845 rmmod nvme_fabrics 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3453009 ']' 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3453009 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3453009 ']' 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3453009 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.845 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3453009 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3453009' 00:10:55.106 killing process with pid 3453009 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3453009 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3453009 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:55.106 00:10:55.106 real 0m10.137s 00:10:55.106 user 0m9.266s 00:10:55.106 sys 0m6.515s 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.106 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.107 ************************************ 00:10:55.107 END TEST 
nvmf_target_discovery 00:10:55.107 ************************************ 00:10:55.107 17:56:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:10:55.107 17:56:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.107 17:56:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.107 17:56:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.369 ************************************ 00:10:55.369 START TEST nvmf_referrals 00:10:55.369 ************************************ 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:10:55.369 * Looking for test storage... 00:10:55.369 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:55.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.369 --rc genhtml_branch_coverage=1 00:10:55.369 --rc genhtml_function_coverage=1 00:10:55.369 --rc genhtml_legend=1 00:10:55.369 --rc geninfo_all_blocks=1 00:10:55.369 --rc geninfo_unexecuted_blocks=1 00:10:55.369 00:10:55.369 ' 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:55.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.369 --rc genhtml_branch_coverage=1 00:10:55.369 --rc genhtml_function_coverage=1 00:10:55.369 --rc genhtml_legend=1 00:10:55.369 --rc geninfo_all_blocks=1 00:10:55.369 --rc geninfo_unexecuted_blocks=1 00:10:55.369 00:10:55.369 ' 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:55.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.369 --rc genhtml_branch_coverage=1 00:10:55.369 --rc genhtml_function_coverage=1 00:10:55.369 --rc genhtml_legend=1 00:10:55.369 --rc geninfo_all_blocks=1 00:10:55.369 --rc geninfo_unexecuted_blocks=1 00:10:55.369 00:10:55.369 ' 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:55.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.369 --rc genhtml_branch_coverage=1 00:10:55.369 --rc genhtml_function_coverage=1 00:10:55.369 --rc genhtml_legend=1 00:10:55.369 --rc geninfo_all_blocks=1 00:10:55.369 --rc geninfo_unexecuted_blocks=1 00:10:55.369 00:10:55.369 ' 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
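
The scripts/common.sh trace above is the lcov version probe: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them numerically field by field; 1 < 2 on the first field, so the older --rc lcov_branch_coverage/lcov_function_coverage option spelling is exported. A reduced standalone sketch of that walk (the real helper also validates that each field is numeric, which is omitted here):

    # Reduced sketch of the cmp_versions walk traced above.
    lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "use legacy lcov options"   # 1 < 2, so this prints
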
nvmf/common.sh@7 -- # uname -s 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.369 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.370 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.370 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.631 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.631 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.631 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.631 17:56:13 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.771 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.771 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.771 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.771 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.771 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.771 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.771 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:03.772 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:03.772 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:03.772 Found net devices under 0000:98:00.0: mlx_0_0 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:03.772 Found net devices under 0000:98:00.1: mlx_0_1 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
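
Device discovery above is sysfs globbing: each Mellanox PCI function (0x15b3:0x1015, a ConnectX-4 Lx) is mapped to its kernel netdev by listing /sys/bus/pci/devices/<bdf>/net. The exact mechanism, lifted from the trace:

    # How "Found net devices under 0000:98:00.0: mlx_0_0" is produced.
    pci=0000:98:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
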
[[ rdma == tcp ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:03.772 17:56:22 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:03.772 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:03.772 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:03.772 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:11:03.773 altname enp152s0f0np0 00:11:03.773 altname ens817f0np0 00:11:03.773 inet 192.168.100.8/24 scope global mlx_0_0 00:11:03.773 valid_lft forever preferred_lft forever 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:03.773 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:03.773 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:11:03.773 altname enp152s0f1np1 00:11:03.773 altname ens817f1np1 00:11:03.773 inet 192.168.100.9/24 scope global mlx_0_1 00:11:03.773 valid_lft forever preferred_lft forever 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:03.773 17:56:22 
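
allocate_nic_ips, traced above, reads each RDMA interface's IPv4 straight out of ip(8)'s one-line output: column four is addr/prefix, and the prefix length is cut off. The same pipeline as common.sh's get_ip_address, standalone:

    # How 192.168.100.8 is extracted from mlx_0_0 in the trace above.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
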
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:03.773 192.168.100.9' 
00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:03.773 192.168.100.9' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:03.773 192.168.100.9' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:03.773 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3457790 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3457790 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3457790 ']' 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.034 17:56:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.034 [2024-11-20 17:56:22.402783] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:11:04.034 [2024-11-20 17:56:22.402859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.034 [2024-11-20 17:56:22.488404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.034 [2024-11-20 17:56:22.530336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.034 [2024-11-20 17:56:22.530373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.034 [2024-11-20 17:56:22.530381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.034 [2024-11-20 17:56:22.530389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.034 [2024-11-20 17:56:22.530395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.034 [2024-11-20 17:56:22.531959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.034 [2024-11-20 17:56:22.532064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.034 [2024-11-20 17:56:22.532221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.034 [2024-11-20 17:56:22.532222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.995 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.995 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:04.995 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.995 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.995 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.995 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.995 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.996 [2024-11-20 17:56:23.285431] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeeff30/0xef4420) succeed. 00:11:04.996 [2024-11-20 17:56:23.299925] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xef15c0/0xf35ac0) succeed. 
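
nvmfappstart, whose startup notices end above, reduces to: launch nvmf_tgt with shared-memory id 0, the full 0xFFFF tracepoint mask, and a four-core mask, then block until the RPC socket answers. A minimal stand-in (waitforlisten is approximated here by a plain rpc.py poll, which is simpler than the real test helper):

    # Minimal equivalent of the nvmfappstart/waitforlisten pair traced above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
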
00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.996 [2024-11-20 17:56:23.438997] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.996 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.319 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:05.319 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:05.319 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:05.319 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
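
The referrals test body starts above: create the RDMA transport, open a discovery listener on port 8009, register three referral entries, and assert that the RPC reports exactly three. Condensed, same $rpc assumption as before:

    # Condensed referral setup traced above.
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    (( $($rpc nvmf_discovery_get_referrals | jq length) == 3 ))
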
common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:05.320 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:05.591 17:56:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.591 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:05.591 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:05.591 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:05.591 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:05.591 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:05.591 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:05.591 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.853 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:06.115 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.377 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
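Everything target/referrals.sh does in the trace above boils down to comparing two views of the same referral list: the target's own RPC (nvmf_discovery_get_referrals) and the discovery log page a host retrieves with nvme discover. A condensed sketch of that comparison, with the address, port, and jq filters taken verbatim from the trace (hostnqn/hostid omitted for brevity):

    # RPC view: traddr of every configured referral.
    rpc_ips=$(./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    # Host view: every discovery-log record that is not the current discovery subsystem.
    nvme_ips=$(nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [[ "$rpc_ips" == "$nvme_ips" ]]

The paired add/remove calls above also suggest that a referral is identified by transport, address, port, and subsystem NQN together: the entry added with -n nqn.2016-06.io.spdk:cnode1 has to be removed with the same NQN.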
00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:06.639 rmmod nvme_rdma 00:11:06.639 rmmod nvme_fabrics 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3457790 ']' 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3457790 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3457790 ']' 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3457790 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.639 17:56:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3457790 00:11:06.639 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.639 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.639 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3457790' 00:11:06.639 killing process with pid 3457790 00:11:06.639 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3457790 00:11:06.639 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3457790 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:06.901 00:11:06.901 real 0m11.559s 00:11:06.901 user 0m14.031s 00:11:06.901 sys 0m7.225s 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.901 17:56:25 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.901 ************************************ 00:11:06.901 END TEST nvmf_referrals 00:11:06.901 ************************************ 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.901 ************************************ 00:11:06.901 START TEST nvmf_connect_disconnect 00:11:06.901 ************************************ 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:06.901 * Looking for test storage... 00:11:06.901 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.901 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:07.163 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:07.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.164 --rc genhtml_branch_coverage=1 00:11:07.164 --rc genhtml_function_coverage=1 00:11:07.164 --rc genhtml_legend=1 00:11:07.164 --rc geninfo_all_blocks=1 00:11:07.164 --rc geninfo_unexecuted_blocks=1 00:11:07.164 00:11:07.164 ' 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:07.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.164 --rc genhtml_branch_coverage=1 00:11:07.164 --rc genhtml_function_coverage=1 00:11:07.164 --rc genhtml_legend=1 00:11:07.164 --rc geninfo_all_blocks=1 00:11:07.164 --rc geninfo_unexecuted_blocks=1 00:11:07.164 00:11:07.164 ' 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:07.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.164 --rc genhtml_branch_coverage=1 00:11:07.164 --rc genhtml_function_coverage=1 00:11:07.164 --rc genhtml_legend=1 00:11:07.164 --rc geninfo_all_blocks=1 00:11:07.164 --rc geninfo_unexecuted_blocks=1 00:11:07.164 00:11:07.164 ' 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:07.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.164 --rc genhtml_branch_coverage=1 00:11:07.164 --rc genhtml_function_coverage=1 00:11:07.164 --rc genhtml_legend=1 00:11:07.164 --rc geninfo_all_blocks=1 00:11:07.164 --rc geninfo_unexecuted_blocks=1 00:11:07.164 00:11:07.164 ' 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:07.164 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.165 17:56:25 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.165 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.165 17:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:15.308 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 
00:11:15.309 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:15.309 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:15.309 Found net devices under 0000:98:00.0: mlx_0_0 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
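The connect_disconnect run that starts here first walks the PCI bus: the mlx array of Mellanox device IDs matches both 0x15b3:0x1015 functions, and each function is then mapped to its kernel netdev through sysfs. (The "[: : integer expression expected" complaint a little earlier is nvmf/common.sh line 33 running '[' '' -eq 1 ']' on an empty value; per the trace, the failed test is simply treated as false and the run carries on.) A sketch of the sysfs mapping, using the first port from the trace:

    # Map a PCI function to its net devices, as nvmf/common.sh does above:
    pci=0000:98:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the netdev directory
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the basenames
    echo "${pci_net_devs[@]}"                          # -> mlx_0_0 on this rig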
00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:15.309 Found net devices under 0000:98:00.1: mlx_0_1 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:15.309 17:56:33 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:15.309 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:15.310 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:15.310 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:11:15.310 altname enp152s0f0np0 00:11:15.310 altname ens817f0np0 00:11:15.310 inet 192.168.100.8/24 scope global mlx_0_0 00:11:15.310 valid_lft forever preferred_lft forever 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:15.310 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:15.310 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:11:15.310 altname enp152s0f1np1 00:11:15.310 altname ens817f1np1 00:11:15.310 inet 192.168.100.9/24 scope global mlx_0_1 00:11:15.310 valid_lft forever preferred_lft forever 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:15.310 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:15.586 17:56:33 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:15.586 192.168.100.9' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:15.586 192.168.100.9' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:15.586 192.168.100.9' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:15.586 17:56:33 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3462730 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3462730 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3462730 ']' 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.586 17:56:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.587 [2024-11-20 17:56:33.987828] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:11:15.587 [2024-11-20 17:56:33.987888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.587 [2024-11-20 17:56:34.068014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.587 [2024-11-20 17:56:34.103853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.587 [2024-11-20 17:56:34.103892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.587 [2024-11-20 17:56:34.103901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.587 [2024-11-20 17:56:34.103908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.587 [2024-11-20 17:56:34.103913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
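Stripped of the xtrace plumbing, nvmfappstart does three things: launch the target, record its pid, and poll the RPC socket until it answers. A minimal sketch, assuming rpc.py's rpc_get_methods serves as the liveness probe (which is how waitforlisten uses it):

    # Flags mirror the trace: instance id 0, tracepoint mask 0xFFFF, cores 0-3.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the app listens on the default UNIX domain RPC socket.
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid"   # bail out if the target already died
        sleep 0.5
    done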
00:11:15.587 [2024-11-20 17:56:34.105683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.587 [2024-11-20 17:56:34.105798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.587 [2024-11-20 17:56:34.105953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.587 [2024-11-20 17:56:34.105953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.530 [2024-11-20 17:56:34.836923] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:16.530 [2024-11-20 17:56:34.868452] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b26f30/0x1b2b420) succeed. 00:11:16.530 [2024-11-20 17:56:34.883211] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b285c0/0x1b6cac0) succeed. 
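With the target listening, the test creates its RDMA transport over RPC. The two create_ib_device notices above are the transport attaching both mlx5 ports, and the preceding warning shows the requested in-capsule data size of 0 being raised to the 256-byte minimum. The equivalent standalone call, arguments exactly as traced (rpc_cmd in the suite is a thin wrapper around scripts/rpc.py):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0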
00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.530 17:56:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.530 [2024-11-20 17:56:35.051815] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:16.530 17:56:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:21.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.761 17:56:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:39.761 17:56:57 
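The provisioning above, plus the loop that produces the five "disconnected 1 controller(s)" notices, condenses to a few commands. A sketch assuming nvme-cli drives the initiator side, which is what the suite's helpers wrap:

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512      # 64 MiB ramdisk, 512-byte blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    for i in $(seq 1 5); do             # num_iterations=5 in the trace
        nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done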
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:39.761 17:56:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.761 17:56:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:39.761 17:56:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:39.761 17:56:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:39.761 17:56:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:39.761 17:56:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.761 17:56:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:39.761 rmmod nvme_rdma 00:11:39.761 rmmod nvme_fabrics 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3462730 ']' 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3462730 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3462730 ']' 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3462730 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3462730 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3462730' 00:11:39.761 killing process with pid 3462730 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3462730 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3462730 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:39.761 00:11:39.761 real 0m32.977s 00:11:39.761 user 1m38.574s 00:11:39.761 sys 0m7.549s 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.761 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:39.761 
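The teardown above is the suite's killprocess guard: confirm the pid is still alive, refuse to kill a bare sudo wrapper, then kill and reap. A reconstruction of that pattern from the trace; the real helper in autotest_common.sh carries additional platform branches:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                              # errors out if the pid is gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != "sudo" ] || return 1   # the trace shows reactor_0 here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }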
************************************ 00:11:39.761 END TEST nvmf_connect_disconnect 00:11:39.761 ************************************ 00:11:40.023 17:56:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.024 ************************************ 00:11:40.024 START TEST nvmf_multitarget 00:11:40.024 ************************************ 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:40.024 * Looking for test storage... 00:11:40.024 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:40.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.024 --rc genhtml_branch_coverage=1 00:11:40.024 --rc genhtml_function_coverage=1 00:11:40.024 --rc genhtml_legend=1 00:11:40.024 --rc geninfo_all_blocks=1 00:11:40.024 --rc geninfo_unexecuted_blocks=1 00:11:40.024 00:11:40.024 ' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:40.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.024 --rc genhtml_branch_coverage=1 00:11:40.024 --rc genhtml_function_coverage=1 00:11:40.024 --rc genhtml_legend=1 00:11:40.024 --rc geninfo_all_blocks=1 00:11:40.024 --rc geninfo_unexecuted_blocks=1 00:11:40.024 00:11:40.024 ' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:40.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.024 --rc genhtml_branch_coverage=1 00:11:40.024 --rc genhtml_function_coverage=1 00:11:40.024 --rc genhtml_legend=1 00:11:40.024 --rc geninfo_all_blocks=1 00:11:40.024 --rc geninfo_unexecuted_blocks=1 00:11:40.024 00:11:40.024 ' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:40.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.024 --rc genhtml_branch_coverage=1 00:11:40.024 --rc genhtml_function_coverage=1 00:11:40.024 --rc genhtml_legend=1 00:11:40.024 --rc geninfo_all_blocks=1 00:11:40.024 --rc geninfo_unexecuted_blocks=1 00:11:40.024 00:11:40.024 ' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.024 17:56:58 
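The trace above is scripts/common.sh deciding that lcov 1.15 predates 2.x before enabling the branch-coverage flags. The comparison walks both dotted version strings component by component; a condensed sketch of that logic (the real cmp_versions also normalizes each component through its decimal helper):

    lt() {   # "is $1 < $2" for dotted versions, e.g. lt 1.15 2
        local -a ver1 ver2
        local IFS=.-:                   # split on dots, dashes, and colons
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                        # equal is not less-than
    }

    lt 1.15 2 && echo "lcov predates 2.x"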
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.024 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.025 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:40.025 17:56:58 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.025 17:56:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:48.169 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:48.169 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:48.169 Found net devices under 0000:98:00.0: mlx_0_0 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:48.169 Found net devices under 0000:98:00.1: mlx_0_1 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:48.169 17:57:06 
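Device discovery above runs purely through sysfs: every PCI function that matched a known NIC id (0x15b3:0x1015 is a ConnectX-4 Lx port) exposes its netdev name under /sys/bus/pci/devices/<bdf>/net/. The core of that loop, with this rig's two BDFs hard-coded for illustration:

    for pci in 0000:98:00.0 0000:98:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the interface dirs
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done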
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:48.169 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:48.170 17:57:06 
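rdma_device_init in the trace is kernel-module bookkeeping ahead of the IP assignment; the exact set it probes via load_ib_rdma_modules is reproduced standalone below:

    # Modules exactly as loaded in the trace above.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done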
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:48.170 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:48.170 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:11:48.170 altname enp152s0f0np0 00:11:48.170 altname ens817f0np0 00:11:48.170 inet 192.168.100.8/24 scope global mlx_0_0 00:11:48.170 valid_lft forever preferred_lft forever 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:48.170 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:48.170 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:11:48.170 altname enp152s0f1np1 00:11:48.170 altname ens817f1np1 00:11:48.170 inet 192.168.100.9/24 scope global mlx_0_1 00:11:48.170 valid_lft forever preferred_lft forever 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:48.170 17:57:06 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:48.170 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:48.433 192.168.100.9' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:48.433 192.168.100.9' 00:11:48.433 17:57:06 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:48.433 192.168.100.9' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3471985 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3471985 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3471985 ']' 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.433 17:57:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.433 [2024-11-20 17:57:06.849280] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
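get_available_rdma_ips emits one address per line, and the suite peels the first and second target IPs off that newline-separated list with head and tail. A standalone rendering of the split traced a few entries back:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)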
00:11:48.433 [2024-11-20 17:57:06.849332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.433 [2024-11-20 17:57:06.931108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.695 [2024-11-20 17:57:06.966804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.695 [2024-11-20 17:57:06.966838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.695 [2024-11-20 17:57:06.966847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.695 [2024-11-20 17:57:06.966855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.695 [2024-11-20 17:57:06.966868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.695 [2024-11-20 17:57:06.968420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.695 [2024-11-20 17:57:06.968533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.695 [2024-11-20 17:57:06.968686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.695 [2024-11-20 17:57:06.968687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:48.695 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:48.956 "nvmf_tgt_1" 00:11:48.956 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:48.956 "nvmf_tgt_2" 00:11:48.956 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:48.956 
17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:49.218 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:49.218 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:49.218 true 00:11:49.218 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:49.218 true 00:11:49.218 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:49.218 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:49.479 rmmod nvme_rdma 00:11:49.479 rmmod nvme_fabrics 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3471985 ']' 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3471985 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3471985 ']' 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3471985 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3471985 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3471985' 00:11:49.479 killing process with pid 3471985 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3471985 00:11:49.479 17:57:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3471985 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:49.740 00:11:49.740 real 0m9.728s 00:11:49.740 user 0m7.661s 00:11:49.740 sys 0m6.532s 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.740 ************************************ 00:11:49.740 END TEST nvmf_multitarget 00:11:49.740 ************************************ 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.740 ************************************ 00:11:49.740 START TEST nvmf_rpc 00:11:49.740 ************************************ 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:49.740 * Looking for test storage... 
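Before rpc.sh proceeds, a note on the shutdown pattern that closed out nvmf_multitarget above: a guarded kill of the recorded nvmf pid, then transport-module unload. Condensed, with the pid from this run (a sketch of the trace, not the full helper):

  nvmfpid=3471985                                 # recorded when the app started
  kill -0 "$nvmfpid"                              # still alive?
  [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ] \
      && kill "$nvmfpid" && wait "$nvmfpid"       # wait works because this shell launched it
  modprobe -v -r nvme-rdma nvme-fabrics           # the rmmod messages appear in the log above
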
00:11:49.740 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.740 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:50.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.003 --rc genhtml_branch_coverage=1 00:11:50.003 --rc genhtml_function_coverage=1 00:11:50.003 --rc genhtml_legend=1 00:11:50.003 --rc geninfo_all_blocks=1 00:11:50.003 --rc geninfo_unexecuted_blocks=1 00:11:50.003 00:11:50.003 ' 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:50.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.003 --rc genhtml_branch_coverage=1 00:11:50.003 --rc genhtml_function_coverage=1 00:11:50.003 --rc genhtml_legend=1 00:11:50.003 --rc geninfo_all_blocks=1 00:11:50.003 --rc geninfo_unexecuted_blocks=1 00:11:50.003 00:11:50.003 ' 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:50.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.003 --rc genhtml_branch_coverage=1 00:11:50.003 --rc genhtml_function_coverage=1 00:11:50.003 --rc genhtml_legend=1 00:11:50.003 --rc geninfo_all_blocks=1 00:11:50.003 --rc geninfo_unexecuted_blocks=1 00:11:50.003 00:11:50.003 ' 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:50.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.003 --rc genhtml_branch_coverage=1 00:11:50.003 --rc genhtml_function_coverage=1 00:11:50.003 --rc genhtml_legend=1 00:11:50.003 --rc geninfo_all_blocks=1 00:11:50.003 --rc geninfo_unexecuted_blocks=1 00:11:50.003 00:11:50.003 ' 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.003 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.004 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:50.004 17:57:08 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.004 17:57:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.153 17:57:16 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.153 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:58.154 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:58.154 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:58.154 Found net devices under 0000:98:00.0: mlx_0_0 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:58.154 Found net devices under 0000:98:00.1: mlx_0_1 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:58.154 17:57:16 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:58.154 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:58.154 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:11:58.154 altname enp152s0f0np0 00:11:58.154 altname ens817f0np0 00:11:58.154 inet 192.168.100.8/24 scope global mlx_0_0 00:11:58.154 valid_lft forever preferred_lft forever 00:11:58.154 
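The per-interface address probe above (repeated next for mlx_0_1) reduces to a single pipeline; a sketch of the helper exactly as exercised here:

  get_ip_address() {
      # Field 4 of 'ip -o -4 addr show <if>' is ADDR/PREFIX; strip the prefix.
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # 192.168.100.8 in this run
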
17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:58.154 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:58.154 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:11:58.154 altname enp152s0f1np1 00:11:58.154 altname ens817f1np1 00:11:58.154 inet 192.168.100.9/24 scope global mlx_0_1 00:11:58.154 valid_lft forever preferred_lft forever 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.154 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
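The nested loop being traced here pairs each detected net device with the rxe-configured device list; the escaped patterns such as \m\l\x\_\0\_\0 are simply literal string matches rendered by xtrace. A condensed reading (variable names follow the trace; treat the reconstruction as approximate):

  get_rdma_if_list() {
      local net_dev rxe_net_dev
      for net_dev in "${net_devs[@]}"; do           # mlx_0_0 mlx_0_1 from PCI discovery
          for rxe_net_dev in "${rxe_net_devs[@]}"; do
              [[ $net_dev == "$rxe_net_dev" ]] && { echo "$net_dev"; continue 2; }
          done
      done
  }
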
00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:58.155 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:58.415 192.168.100.9' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:58.415 192.168.100.9' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:58.415 192.168.100.9' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
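The head/tail juggling above extracts the two target addresses from the newline-separated list; equivalently:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9
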
00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3477141 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3477141 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3477141 ']' 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.415 17:57:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 [2024-11-20 17:57:16.808911] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:11:58.415 [2024-11-20 17:57:16.808983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.415 [2024-11-20 17:57:16.892912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.415 [2024-11-20 17:57:16.934502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.415 [2024-11-20 17:57:16.934537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.415 [2024-11-20 17:57:16.934545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.415 [2024-11-20 17:57:16.934556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.415 [2024-11-20 17:57:16.934562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
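The target for rpc.sh was launched just above; stripped of tracing, the nvmfappstart plus waitforlisten sequence amounts to the following (waitforlisten is the framework helper that polls the RPC socket named in the waiting message):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                  # 3477141 in this run
  waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock answers RPC
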
00:11:58.415 [2024-11-20 17:57:16.936181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.415 [2024-11-20 17:57:16.936298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.415 [2024-11-20 17:57:16.936455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.415 [2024-11-20 17:57:16.936456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:59.356 "tick_rate": 2400000000, 00:11:59.356 "poll_groups": [ 00:11:59.356 { 00:11:59.356 "name": "nvmf_tgt_poll_group_000", 00:11:59.356 "admin_qpairs": 0, 00:11:59.356 "io_qpairs": 0, 00:11:59.356 "current_admin_qpairs": 0, 00:11:59.356 "current_io_qpairs": 0, 00:11:59.356 "pending_bdev_io": 0, 00:11:59.356 "completed_nvme_io": 0, 00:11:59.356 "transports": [] 00:11:59.356 }, 00:11:59.356 { 00:11:59.356 "name": "nvmf_tgt_poll_group_001", 00:11:59.356 "admin_qpairs": 0, 00:11:59.356 "io_qpairs": 0, 00:11:59.356 "current_admin_qpairs": 0, 00:11:59.356 "current_io_qpairs": 0, 00:11:59.356 "pending_bdev_io": 0, 00:11:59.356 "completed_nvme_io": 0, 00:11:59.356 "transports": [] 00:11:59.356 }, 00:11:59.356 { 00:11:59.356 "name": "nvmf_tgt_poll_group_002", 00:11:59.356 "admin_qpairs": 0, 00:11:59.356 "io_qpairs": 0, 00:11:59.356 "current_admin_qpairs": 0, 00:11:59.356 "current_io_qpairs": 0, 00:11:59.356 "pending_bdev_io": 0, 00:11:59.356 "completed_nvme_io": 0, 00:11:59.356 "transports": [] 00:11:59.356 }, 00:11:59.356 { 00:11:59.356 "name": "nvmf_tgt_poll_group_003", 00:11:59.356 "admin_qpairs": 0, 00:11:59.356 "io_qpairs": 0, 00:11:59.356 "current_admin_qpairs": 0, 00:11:59.356 "current_io_qpairs": 0, 00:11:59.356 "pending_bdev_io": 0, 00:11:59.356 "completed_nvme_io": 0, 00:11:59.356 "transports": [] 00:11:59.356 } 00:11:59.356 ] 00:11:59.356 }' 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.356 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.356 [2024-11-20 17:57:17.816245] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x203af20/0x203f410) succeed. 00:11:59.356 [2024-11-20 17:57:17.830827] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x203c5b0/0x2080ab0) succeed. 00:11:59.618 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.618 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:59.618 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.618 17:57:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.618 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.618 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:59.618 "tick_rate": 2400000000, 00:11:59.618 "poll_groups": [ 00:11:59.618 { 00:11:59.618 "name": "nvmf_tgt_poll_group_000", 00:11:59.618 "admin_qpairs": 0, 00:11:59.618 "io_qpairs": 0, 00:11:59.618 "current_admin_qpairs": 0, 00:11:59.618 "current_io_qpairs": 0, 00:11:59.618 "pending_bdev_io": 0, 00:11:59.618 "completed_nvme_io": 0, 00:11:59.618 "transports": [ 00:11:59.618 { 00:11:59.618 "trtype": "RDMA", 00:11:59.618 "pending_data_buffer": 0, 00:11:59.618 "devices": [ 00:11:59.618 { 00:11:59.618 "name": "mlx5_0", 00:11:59.618 "polls": 16074, 00:11:59.618 "idle_polls": 16074, 00:11:59.618 "completions": 0, 00:11:59.618 "requests": 0, 00:11:59.618 "request_latency": 0, 00:11:59.618 "pending_free_request": 0, 00:11:59.618 "pending_rdma_read": 0, 00:11:59.618 "pending_rdma_write": 0, 00:11:59.618 "pending_rdma_send": 0, 00:11:59.618 "total_send_wrs": 0, 00:11:59.618 "send_doorbell_updates": 0, 00:11:59.618 "total_recv_wrs": 4096, 00:11:59.618 "recv_doorbell_updates": 1 00:11:59.618 }, 00:11:59.618 { 00:11:59.618 "name": "mlx5_1", 00:11:59.618 "polls": 16074, 00:11:59.618 "idle_polls": 16074, 00:11:59.618 "completions": 0, 00:11:59.618 "requests": 0, 00:11:59.618 "request_latency": 0, 00:11:59.618 "pending_free_request": 0, 00:11:59.618 "pending_rdma_read": 0, 00:11:59.618 "pending_rdma_write": 0, 00:11:59.618 "pending_rdma_send": 0, 00:11:59.618 "total_send_wrs": 0, 00:11:59.618 "send_doorbell_updates": 0, 00:11:59.618 "total_recv_wrs": 4096, 00:11:59.618 "recv_doorbell_updates": 1 00:11:59.618 } 00:11:59.618 ] 00:11:59.618 } 00:11:59.618 ] 00:11:59.618 }, 00:11:59.618 { 00:11:59.618 "name": "nvmf_tgt_poll_group_001", 00:11:59.618 "admin_qpairs": 0, 00:11:59.618 "io_qpairs": 0, 00:11:59.618 "current_admin_qpairs": 0, 00:11:59.618 "current_io_qpairs": 0, 00:11:59.618 "pending_bdev_io": 0, 00:11:59.618 "completed_nvme_io": 0, 00:11:59.618 "transports": [ 00:11:59.618 { 00:11:59.618 "trtype": "RDMA", 00:11:59.618 "pending_data_buffer": 0, 00:11:59.618 "devices": [ 00:11:59.618 { 00:11:59.618 "name": "mlx5_0", 
00:11:59.618 "polls": 16137, 00:11:59.618 "idle_polls": 16137, 00:11:59.618 "completions": 0, 00:11:59.618 "requests": 0, 00:11:59.618 "request_latency": 0, 00:11:59.618 "pending_free_request": 0, 00:11:59.618 "pending_rdma_read": 0, 00:11:59.618 "pending_rdma_write": 0, 00:11:59.618 "pending_rdma_send": 0, 00:11:59.618 "total_send_wrs": 0, 00:11:59.618 "send_doorbell_updates": 0, 00:11:59.618 "total_recv_wrs": 4096, 00:11:59.618 "recv_doorbell_updates": 1 00:11:59.618 }, 00:11:59.618 { 00:11:59.618 "name": "mlx5_1", 00:11:59.618 "polls": 16137, 00:11:59.618 "idle_polls": 16137, 00:11:59.618 "completions": 0, 00:11:59.618 "requests": 0, 00:11:59.618 "request_latency": 0, 00:11:59.618 "pending_free_request": 0, 00:11:59.618 "pending_rdma_read": 0, 00:11:59.618 "pending_rdma_write": 0, 00:11:59.618 "pending_rdma_send": 0, 00:11:59.618 "total_send_wrs": 0, 00:11:59.618 "send_doorbell_updates": 0, 00:11:59.618 "total_recv_wrs": 4096, 00:11:59.618 "recv_doorbell_updates": 1 00:11:59.618 } 00:11:59.618 ] 00:11:59.618 } 00:11:59.618 ] 00:11:59.618 }, 00:11:59.618 { 00:11:59.618 "name": "nvmf_tgt_poll_group_002", 00:11:59.618 "admin_qpairs": 0, 00:11:59.618 "io_qpairs": 0, 00:11:59.618 "current_admin_qpairs": 0, 00:11:59.618 "current_io_qpairs": 0, 00:11:59.618 "pending_bdev_io": 0, 00:11:59.618 "completed_nvme_io": 0, 00:11:59.618 "transports": [ 00:11:59.618 { 00:11:59.618 "trtype": "RDMA", 00:11:59.618 "pending_data_buffer": 0, 00:11:59.618 "devices": [ 00:11:59.618 { 00:11:59.618 "name": "mlx5_0", 00:11:59.618 "polls": 5701, 00:11:59.618 "idle_polls": 5701, 00:11:59.618 "completions": 0, 00:11:59.618 "requests": 0, 00:11:59.618 "request_latency": 0, 00:11:59.618 "pending_free_request": 0, 00:11:59.618 "pending_rdma_read": 0, 00:11:59.618 "pending_rdma_write": 0, 00:11:59.618 "pending_rdma_send": 0, 00:11:59.618 "total_send_wrs": 0, 00:11:59.618 "send_doorbell_updates": 0, 00:11:59.618 "total_recv_wrs": 4096, 00:11:59.618 "recv_doorbell_updates": 1 00:11:59.618 }, 00:11:59.618 { 00:11:59.618 "name": "mlx5_1", 00:11:59.618 "polls": 5701, 00:11:59.618 "idle_polls": 5701, 00:11:59.618 "completions": 0, 00:11:59.618 "requests": 0, 00:11:59.618 "request_latency": 0, 00:11:59.618 "pending_free_request": 0, 00:11:59.618 "pending_rdma_read": 0, 00:11:59.618 "pending_rdma_write": 0, 00:11:59.618 "pending_rdma_send": 0, 00:11:59.618 "total_send_wrs": 0, 00:11:59.618 "send_doorbell_updates": 0, 00:11:59.618 "total_recv_wrs": 4096, 00:11:59.618 "recv_doorbell_updates": 1 00:11:59.618 } 00:11:59.618 ] 00:11:59.618 } 00:11:59.618 ] 00:11:59.618 }, 00:11:59.618 { 00:11:59.618 "name": "nvmf_tgt_poll_group_003", 00:11:59.619 "admin_qpairs": 0, 00:11:59.619 "io_qpairs": 0, 00:11:59.619 "current_admin_qpairs": 0, 00:11:59.619 "current_io_qpairs": 0, 00:11:59.619 "pending_bdev_io": 0, 00:11:59.619 "completed_nvme_io": 0, 00:11:59.619 "transports": [ 00:11:59.619 { 00:11:59.619 "trtype": "RDMA", 00:11:59.619 "pending_data_buffer": 0, 00:11:59.619 "devices": [ 00:11:59.619 { 00:11:59.619 "name": "mlx5_0", 00:11:59.619 "polls": 843, 00:11:59.619 "idle_polls": 843, 00:11:59.619 "completions": 0, 00:11:59.619 "requests": 0, 00:11:59.619 "request_latency": 0, 00:11:59.619 "pending_free_request": 0, 00:11:59.619 "pending_rdma_read": 0, 00:11:59.619 "pending_rdma_write": 0, 00:11:59.619 "pending_rdma_send": 0, 00:11:59.619 "total_send_wrs": 0, 00:11:59.619 "send_doorbell_updates": 0, 00:11:59.619 "total_recv_wrs": 4096, 00:11:59.619 "recv_doorbell_updates": 1 00:11:59.619 }, 00:11:59.619 { 00:11:59.619 "name": "mlx5_1", 
00:11:59.619 "polls": 843, 00:11:59.619 "idle_polls": 843, 00:11:59.619 "completions": 0, 00:11:59.619 "requests": 0, 00:11:59.619 "request_latency": 0, 00:11:59.619 "pending_free_request": 0, 00:11:59.619 "pending_rdma_read": 0, 00:11:59.619 "pending_rdma_write": 0, 00:11:59.619 "pending_rdma_send": 0, 00:11:59.619 "total_send_wrs": 0, 00:11:59.619 "send_doorbell_updates": 0, 00:11:59.619 "total_recv_wrs": 4096, 00:11:59.619 "recv_doorbell_updates": 1 00:11:59.619 } 00:11:59.619 ] 00:11:59.619 } 00:11:59.619 ] 00:11:59.619 } 00:11:59.619 ] 00:11:59.619 }' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:11:59.619 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:59.880 17:57:18 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.880 Malloc1 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.880 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.881 [2024-11-20 17:57:18.322845] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:59.881 17:57:18 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:59.881 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:11:59.881 [2024-11-20 17:57:18.377683] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:00.141 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:00.141 could not add new controller: failed to write to nvme-fabrics device 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.141 17:57:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:01.529 17:57:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.529 17:57:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:01.529 17:57:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.529 17:57:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:01.529 17:57:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.517 17:57:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.517 17:57:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:03.517 17:57:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.517 17:57:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.517 17:57:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.517 17:57:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:03.517 17:57:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.488 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.488 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:04.488 17:57:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.488 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:04.488 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:04.488 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:04.749 [2024-11-20 17:57:23.040896] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:04.749 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:04.749 could not add new controller: failed to write to nvme-fabrics device 00:12:04.749 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:04.749 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.749 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.749 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.750 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:04.750 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.750 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.750 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.750 17:57:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:06.137 17:57:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.137 17:57:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:06.137 17:57:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.137 17:57:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:06.137 17:57:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:08.051 17:57:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:08.051 17:57:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:08.051 17:57:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.051 17:57:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:08.051 17:57:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.051 17:57:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:08.051 17:57:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.437 [2024-11-20 17:57:27.638381] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.437 17:57:27 
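# waitforserial / waitforserial_disconnect, invoked after every connect and
# disconnect above, poll lsblk until a block device carrying the subsystem
# serial appears or vanishes. Roughly (a sketch reconstructed from the trace,
# not the verbatim helpers):
waitforserial() {
    local serial=$1 nvme_device_counter=${2:-1} i=0
    sleep 2
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 1
    done
    return 0
}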
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.437 17:57:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:10.824 17:57:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.824 17:57:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.824 17:57:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.824 17:57:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:10.824 17:57:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.740 17:57:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.740 17:57:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.740 17:57:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.740 17:57:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.740 17:57:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.740 17:57:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:12.740 17:57:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.125 [2024-11-20 17:57:32.395784] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.125 17:57:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:15.068 17:57:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.068 17:57:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.068 17:57:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.068 17:57:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:15.068 17:57:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.618 17:57:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.618 17:57:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.618 
17:57:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.618 17:57:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:17.618 17:57:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.618 17:57:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:17.618 17:57:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.561 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.562 [2024-11-20 17:57:36.914509] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.562 17:57:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:19.948 17:57:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.948 17:57:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.948 17:57:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.948 17:57:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:19.948 17:57:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.910 17:57:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.910 17:57:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.910 17:57:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.910 17:57:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:21.910 17:57:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.910 17:57:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:21.910 17:57:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:22.853 17:57:41 
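# Each pass of this first seq 1 5 loop rebuilds the subsystem, pins Malloc1 to
# an explicit namespace ID with -n 5, round-trips a host connection, and tears
# everything back down. One iteration, condensed (same values as in this run,
# reusing SUBNQN/HOSTNQN from the sketch above):
rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t rdma -a 192.168.100.8 -s 4420
rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5      # explicit NSID instead of auto-assign
rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"           # called bare here, as in the trace
nvme connect -i 15 -t rdma -n "$SUBNQN" -q "$HOSTNQN" -a 192.168.100.8 -s 4420
waitforserial SPDKISFASTANDAWESOME                       # exactly one device must surface
nvme disconnect -n "$SUBNQN"
rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5              # remove by NSID, not by bdev name
rpc.py nvmf_delete_subsystem "$SUBNQN"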
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.853 [2024-11-20 17:57:41.314841] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.853 17:57:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:24.241 17:57:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.241 17:57:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:24.241 17:57:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.241 17:57:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:24.241 17:57:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:26.790 17:57:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:26.790 17:57:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:26.790 17:57:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.790 17:57:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:26.790 17:57:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.790 17:57:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:26.790 17:57:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.732 17:57:45 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.732 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.733 [2024-11-20 17:57:45.977535] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.733 17:57:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.733 17:57:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.733 17:57:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:29.116 17:57:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.116 17:57:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:29.116 17:57:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.116 17:57:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:29.116 17:57:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:31.029 17:57:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:31.029 17:57:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:31.029 17:57:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.029 17:57:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:31.029 17:57:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:12:31.029 17:57:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:31.029 17:57:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 [2024-11-20 17:57:50.759444] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 [2024-11-20 17:57:50.823638] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 [2024-11-20 17:57:50.891850] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.418 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.419 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.419 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:32.419 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.419 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.419 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 [2024-11-20 17:57:50.960115] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
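# The second seq 1 5 loop (rpc.sh@99-107) never connects a host at all; it only
# cycles the namespace. nvmf_subsystem_add_ns is called without -n, so the
# target auto-assigns the lowest free NSID (1 on a fresh subsystem), which is
# why the matching remove targets NSID 1:
for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1   # NSID auto-assigned, lands on 1
    rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"
    rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 1
    rpc.py nvmf_delete_subsystem "$SUBNQN"
done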
00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 [2024-11-20 17:57:51.016282] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.682 17:57:51 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:32.682 "tick_rate": 2400000000, 00:12:32.682 "poll_groups": [ 00:12:32.682 { 00:12:32.682 "name": "nvmf_tgt_poll_group_000", 00:12:32.682 "admin_qpairs": 2, 00:12:32.682 "io_qpairs": 27, 00:12:32.682 "current_admin_qpairs": 0, 00:12:32.682 "current_io_qpairs": 0, 00:12:32.682 "pending_bdev_io": 0, 00:12:32.682 "completed_nvme_io": 79, 00:12:32.682 "transports": [ 00:12:32.682 { 00:12:32.682 "trtype": "RDMA", 00:12:32.682 "pending_data_buffer": 0, 00:12:32.682 "devices": [ 00:12:32.682 { 00:12:32.682 "name": "mlx5_0", 00:12:32.682 "polls": 4685237, 00:12:32.682 "idle_polls": 4684991, 00:12:32.682 "completions": 267, 00:12:32.682 "requests": 133, 00:12:32.682 "request_latency": 18768780, 00:12:32.682 "pending_free_request": 0, 00:12:32.682 "pending_rdma_read": 0, 00:12:32.682 "pending_rdma_write": 0, 00:12:32.682 "pending_rdma_send": 0, 00:12:32.682 "total_send_wrs": 211, 00:12:32.682 "send_doorbell_updates": 121, 00:12:32.682 "total_recv_wrs": 4229, 00:12:32.682 "recv_doorbell_updates": 121 00:12:32.682 }, 00:12:32.682 { 00:12:32.682 "name": "mlx5_1", 00:12:32.682 "polls": 4685237, 00:12:32.682 "idle_polls": 4685237, 00:12:32.682 "completions": 0, 00:12:32.682 "requests": 0, 00:12:32.682 "request_latency": 0, 00:12:32.682 "pending_free_request": 0, 00:12:32.682 "pending_rdma_read": 0, 00:12:32.682 "pending_rdma_write": 0, 00:12:32.682 "pending_rdma_send": 0, 00:12:32.682 "total_send_wrs": 0, 00:12:32.682 "send_doorbell_updates": 0, 00:12:32.682 "total_recv_wrs": 4096, 00:12:32.682 "recv_doorbell_updates": 1 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 }, 00:12:32.682 { 00:12:32.682 "name": "nvmf_tgt_poll_group_001", 00:12:32.682 "admin_qpairs": 2, 00:12:32.682 "io_qpairs": 26, 00:12:32.682 "current_admin_qpairs": 0, 00:12:32.682 "current_io_qpairs": 0, 00:12:32.682 "pending_bdev_io": 0, 00:12:32.682 "completed_nvme_io": 126, 00:12:32.682 "transports": [ 00:12:32.682 { 00:12:32.682 "trtype": "RDMA", 00:12:32.682 "pending_data_buffer": 0, 00:12:32.682 "devices": [ 00:12:32.682 { 00:12:32.682 "name": "mlx5_0", 00:12:32.682 "polls": 4931747, 00:12:32.682 "idle_polls": 4931433, 00:12:32.682 "completions": 356, 00:12:32.682 "requests": 178, 00:12:32.682 "request_latency": 29662746, 00:12:32.682 "pending_free_request": 0, 00:12:32.682 "pending_rdma_read": 0, 00:12:32.682 "pending_rdma_write": 0, 00:12:32.682 "pending_rdma_send": 0, 00:12:32.682 "total_send_wrs": 302, 00:12:32.682 "send_doorbell_updates": 152, 00:12:32.682 "total_recv_wrs": 4274, 00:12:32.682 "recv_doorbell_updates": 153 00:12:32.682 }, 00:12:32.682 { 00:12:32.682 "name": "mlx5_1", 00:12:32.682 "polls": 4931747, 00:12:32.682 "idle_polls": 4931747, 00:12:32.682 "completions": 0, 00:12:32.682 "requests": 0, 00:12:32.682 "request_latency": 0, 00:12:32.682 "pending_free_request": 0, 00:12:32.682 
"pending_rdma_read": 0, 00:12:32.682 "pending_rdma_write": 0, 00:12:32.682 "pending_rdma_send": 0, 00:12:32.682 "total_send_wrs": 0, 00:12:32.682 "send_doorbell_updates": 0, 00:12:32.682 "total_recv_wrs": 4096, 00:12:32.682 "recv_doorbell_updates": 1 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 }, 00:12:32.682 { 00:12:32.682 "name": "nvmf_tgt_poll_group_002", 00:12:32.682 "admin_qpairs": 1, 00:12:32.682 "io_qpairs": 26, 00:12:32.682 "current_admin_qpairs": 0, 00:12:32.682 "current_io_qpairs": 0, 00:12:32.682 "pending_bdev_io": 0, 00:12:32.682 "completed_nvme_io": 124, 00:12:32.682 "transports": [ 00:12:32.682 { 00:12:32.682 "trtype": "RDMA", 00:12:32.682 "pending_data_buffer": 0, 00:12:32.682 "devices": [ 00:12:32.682 { 00:12:32.682 "name": "mlx5_0", 00:12:32.682 "polls": 4959211, 00:12:32.682 "idle_polls": 4958947, 00:12:32.682 "completions": 303, 00:12:32.682 "requests": 151, 00:12:32.682 "request_latency": 28083134, 00:12:32.682 "pending_free_request": 0, 00:12:32.682 "pending_rdma_read": 0, 00:12:32.682 "pending_rdma_write": 0, 00:12:32.682 "pending_rdma_send": 0, 00:12:32.682 "total_send_wrs": 262, 00:12:32.682 "send_doorbell_updates": 128, 00:12:32.682 "total_recv_wrs": 4247, 00:12:32.682 "recv_doorbell_updates": 128 00:12:32.682 }, 00:12:32.682 { 00:12:32.682 "name": "mlx5_1", 00:12:32.682 "polls": 4959211, 00:12:32.682 "idle_polls": 4959211, 00:12:32.682 "completions": 0, 00:12:32.682 "requests": 0, 00:12:32.682 "request_latency": 0, 00:12:32.682 "pending_free_request": 0, 00:12:32.682 "pending_rdma_read": 0, 00:12:32.682 "pending_rdma_write": 0, 00:12:32.682 "pending_rdma_send": 0, 00:12:32.682 "total_send_wrs": 0, 00:12:32.682 "send_doorbell_updates": 0, 00:12:32.682 "total_recv_wrs": 4096, 00:12:32.682 "recv_doorbell_updates": 1 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 }, 00:12:32.682 { 00:12:32.682 "name": "nvmf_tgt_poll_group_003", 00:12:32.682 "admin_qpairs": 2, 00:12:32.682 "io_qpairs": 26, 00:12:32.682 "current_admin_qpairs": 0, 00:12:32.682 "current_io_qpairs": 0, 00:12:32.682 "pending_bdev_io": 0, 00:12:32.682 "completed_nvme_io": 126, 00:12:32.682 "transports": [ 00:12:32.682 { 00:12:32.682 "trtype": "RDMA", 00:12:32.682 "pending_data_buffer": 0, 00:12:32.682 "devices": [ 00:12:32.682 { 00:12:32.682 "name": "mlx5_0", 00:12:32.682 "polls": 3316354, 00:12:32.682 "idle_polls": 3316038, 00:12:32.682 "completions": 358, 00:12:32.682 "requests": 179, 00:12:32.682 "request_latency": 40423516, 00:12:32.682 "pending_free_request": 0, 00:12:32.682 "pending_rdma_read": 0, 00:12:32.682 "pending_rdma_write": 0, 00:12:32.682 "pending_rdma_send": 0, 00:12:32.682 "total_send_wrs": 304, 00:12:32.682 "send_doorbell_updates": 154, 00:12:32.682 "total_recv_wrs": 4275, 00:12:32.682 "recv_doorbell_updates": 155 00:12:32.682 }, 00:12:32.682 { 00:12:32.682 "name": "mlx5_1", 00:12:32.682 "polls": 3316354, 00:12:32.682 "idle_polls": 3316354, 00:12:32.682 "completions": 0, 00:12:32.682 "requests": 0, 00:12:32.682 "request_latency": 0, 00:12:32.682 "pending_free_request": 0, 00:12:32.682 "pending_rdma_read": 0, 00:12:32.682 "pending_rdma_write": 0, 00:12:32.682 "pending_rdma_send": 0, 00:12:32.682 "total_send_wrs": 0, 00:12:32.682 "send_doorbell_updates": 0, 00:12:32.682 "total_recv_wrs": 4096, 00:12:32.682 "recv_doorbell_updates": 1 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 } 00:12:32.682 ] 00:12:32.682 }' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:12:32.682 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1284 > 0 )) 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 116938176 > 0 )) 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:32.945 rmmod nvme_rdma 00:12:32.945 rmmod nvme_fabrics 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.945 
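The jsum calls traced above are rpc.sh's aggregation helper: apply a jq filter to the captured nvmf_get_stats JSON, then sum the selected values with awk. A minimal sketch reconstructed from the traced pipeline (feeding the JSON through a $stats here-string is an assumption about the helper's exact plumbing):

    # Sum every value selected by a jq filter over the captured stats JSON.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

The checks above then assert each aggregate is non-zero; the sums are consistent with the dumped stats, e.g. admin_qpairs 2+2+1+2 = 7 and io_qpairs 27+26+26+26 = 105 across the four poll groups.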
17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3477141 ']' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3477141 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3477141 ']' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3477141 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3477141 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3477141' 00:12:32.945 killing process with pid 3477141 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3477141 00:12:32.945 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3477141 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:33.205 00:12:33.205 real 0m43.464s 00:12:33.205 user 2m21.001s 00:12:33.205 sys 0m8.122s 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.205 ************************************ 00:12:33.205 END TEST nvmf_rpc 00:12:33.205 ************************************ 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.205 ************************************ 00:12:33.205 START TEST nvmf_invalid 00:12:33.205 ************************************ 00:12:33.205 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:33.467 * Looking for test storage... 
00:12:33.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:33.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.467 --rc genhtml_branch_coverage=1 00:12:33.467 --rc genhtml_function_coverage=1 00:12:33.467 --rc genhtml_legend=1 00:12:33.467 --rc geninfo_all_blocks=1 00:12:33.467 --rc geninfo_unexecuted_blocks=1 00:12:33.467 00:12:33.467 ' 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:33.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.467 --rc genhtml_branch_coverage=1 00:12:33.467 --rc genhtml_function_coverage=1 00:12:33.467 --rc genhtml_legend=1 00:12:33.467 --rc geninfo_all_blocks=1 00:12:33.467 --rc geninfo_unexecuted_blocks=1 00:12:33.467 00:12:33.467 ' 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:33.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.467 --rc genhtml_branch_coverage=1 00:12:33.467 --rc genhtml_function_coverage=1 00:12:33.467 --rc genhtml_legend=1 00:12:33.467 --rc geninfo_all_blocks=1 00:12:33.467 --rc geninfo_unexecuted_blocks=1 00:12:33.467 00:12:33.467 ' 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:33.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.467 --rc genhtml_branch_coverage=1 00:12:33.467 --rc genhtml_function_coverage=1 00:12:33.467 --rc genhtml_legend=1 00:12:33.467 --rc geninfo_all_blocks=1 00:12:33.467 --rc geninfo_unexecuted_blocks=1 00:12:33.467 00:12:33.467 ' 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:33.467 
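The scripts/common.sh trace above is the lcov version gate: cmp_versions splits both version strings on '.', '-' and ':' and compares them component by component, which is how lcov 1.15 is classified as older than 2. A condensed sketch of the traced logic (padding missing components with 0 and treating plain equality as a failed strict comparison are simplifications of the full helper):

    # Compare two dotted version strings component-wise, as in the trace.
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        return 1
    }

    cmp_versions 1.15 '<' 2 && echo 'lcov predates 2.x'   # true in this run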
17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.467 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.468 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.468 17:57:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.616 17:57:59 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:41.616 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:41.616 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.616 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:41.617 Found net devices under 0000:98:00.0: mlx_0_0 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:41.617 Found net devices under 0000:98:00.1: mlx_0_1 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:12:41.617 17:57:59 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:41.617 17:57:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:41.617 17:58:00 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:41.617 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:41.617 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:41.617 altname enp152s0f0np0 00:12:41.617 altname ens817f0np0 00:12:41.617 inet 192.168.100.8/24 scope global mlx_0_0 00:12:41.617 valid_lft forever preferred_lft forever 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:41.617 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:41.617 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:41.617 altname enp152s0f1np1 00:12:41.617 altname ens817f1np1 00:12:41.617 inet 192.168.100.9/24 scope global mlx_0_1 00:12:41.617 valid_lft forever preferred_lft forever 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:41.617 17:58:00 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.617 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:41.618 192.168.100.9' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:41.618 192.168.100.9' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:41.618 17:58:00 
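The repeated ip/awk/cut pipeline in this stretch of the trace is common.sh's get_ip_address, used first to pick up the addresses assigned to the two mlx interfaces and then to assemble RDMA_IP_LIST. A sketch of the helper exactly as traced (interface names are the ones discovered in this run):

    # Print the IPv4 address(es) configured on an interface, CIDR suffix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9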
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:41.618 192.168.100.9' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:41.618 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3488302 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3488302 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3488302 ']' 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.880 17:58:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 [2024-11-20 17:58:00.225370] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:12:41.880 [2024-11-20 17:58:00.225442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.880 [2024-11-20 17:58:00.312742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.880 [2024-11-20 17:58:00.355145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.880 [2024-11-20 17:58:00.355184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:41.880 [2024-11-20 17:58:00.355192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.880 [2024-11-20 17:58:00.355198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.880 [2024-11-20 17:58:00.355204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.880 [2024-11-20 17:58:00.357012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.880 [2024-11-20 17:58:00.357128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.880 [2024-11-20 17:58:00.357285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.880 [2024-11-20 17:58:00.357285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10882 00:12:42.824 [2024-11-20 17:58:01.234007] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:42.824 { 00:12:42.824 "nqn": "nqn.2016-06.io.spdk:cnode10882", 00:12:42.824 "tgt_name": "foobar", 00:12:42.824 "method": "nvmf_create_subsystem", 00:12:42.824 "req_id": 1 00:12:42.824 } 00:12:42.824 Got JSON-RPC error response 00:12:42.824 response: 00:12:42.824 { 00:12:42.824 "code": -32603, 00:12:42.824 "message": "Unable to find target foobar" 00:12:42.824 }' 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:42.824 { 00:12:42.824 "nqn": "nqn.2016-06.io.spdk:cnode10882", 00:12:42.824 "tgt_name": "foobar", 00:12:42.824 "method": "nvmf_create_subsystem", 00:12:42.824 "req_id": 1 00:12:42.824 } 00:12:42.824 Got JSON-RPC error response 00:12:42.824 response: 00:12:42.824 { 00:12:42.824 "code": -32603, 00:12:42.824 "message": "Unable to find target foobar" 00:12:42.824 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:42.824 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode910 00:12:43.085 [2024-11-20 17:58:01.426685] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode910: 
invalid serial number 'SPDKISFASTANDAWESOME' 00:12:43.085 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:43.085 { 00:12:43.085 "nqn": "nqn.2016-06.io.spdk:cnode910", 00:12:43.085 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:43.085 "method": "nvmf_create_subsystem", 00:12:43.085 "req_id": 1 00:12:43.085 } 00:12:43.085 Got JSON-RPC error response 00:12:43.085 response: 00:12:43.085 { 00:12:43.085 "code": -32602, 00:12:43.085 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:43.085 }' 00:12:43.085 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:43.085 { 00:12:43.085 "nqn": "nqn.2016-06.io.spdk:cnode910", 00:12:43.085 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:43.085 "method": "nvmf_create_subsystem", 00:12:43.085 "req_id": 1 00:12:43.085 } 00:12:43.085 Got JSON-RPC error response 00:12:43.085 response: 00:12:43.085 { 00:12:43.085 "code": -32602, 00:12:43.085 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:43.085 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:43.085 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:43.085 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode51 00:12:43.085 [2024-11-20 17:58:01.619230] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode51: invalid model number 'SPDK_Controller' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:43.348 { 00:12:43.348 "nqn": "nqn.2016-06.io.spdk:cnode51", 00:12:43.348 "model_number": "SPDK_Controller\u001f", 00:12:43.348 "method": "nvmf_create_subsystem", 00:12:43.348 "req_id": 1 00:12:43.348 } 00:12:43.348 Got JSON-RPC error response 00:12:43.348 response: 00:12:43.348 { 00:12:43.348 "code": -32602, 00:12:43.348 "message": "Invalid MN SPDK_Controller\u001f" 00:12:43.348 }' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:43.348 { 00:12:43.348 "nqn": "nqn.2016-06.io.spdk:cnode51", 00:12:43.348 "model_number": "SPDK_Controller\u001f", 00:12:43.348 "method": "nvmf_create_subsystem", 00:12:43.348 "req_id": 1 00:12:43.348 } 00:12:43.348 Got JSON-RPC error response 00:12:43.348 response: 00:12:43.348 { 00:12:43.348 "code": -32602, 00:12:43.348 "message": "Invalid MN SPDK_Controller\u001f" 00:12:43.348 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 
00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 73 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.348 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+=s 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ M == \- ]] 00:12:43.349 17:58:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'M!2](TIA1I4;I"B;]F3)8/,s&#&jzDB]4,n+o' 00:12:43.880 17:58:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '{yt%?h/2b$>A1I4;I"B;]F3)8/,s&#&jzDB]4,n+o' nqn.2016-06.io.spdk:cnode17625 00:12:44.199 [2024-11-20 17:58:02.481962] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17625: invalid model number '{yt%?h/2b$>A1I4;I"B;]F3)8/,s&#&jzDB]4,n+o' 00:12:44.199 17:58:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:44.199 { 00:12:44.199 "nqn": "nqn.2016-06.io.spdk:cnode17625", 00:12:44.199 "model_number": "{yt%?h/2b$>A1I4;I\"B;]F3)8/,s&#&jzDB]4,n+o", 00:12:44.199 "method": "nvmf_create_subsystem", 00:12:44.199 "req_id": 1 00:12:44.199 } 00:12:44.199 Got JSON-RPC error response 00:12:44.199 response: 00:12:44.199 { 00:12:44.199 "code": -32602, 00:12:44.199 "message": "Invalid MN {yt%?h/2b$>A1I4;I\"B;]F3)8/,s&#&jzDB]4,n+o" 00:12:44.199 }' 00:12:44.199 17:58:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:44.199 { 00:12:44.199 "nqn": "nqn.2016-06.io.spdk:cnode17625", 00:12:44.199 "model_number": "{yt%?h/2b$>A1I4;I\"B;]F3)8/,s&#&jzDB]4,n+o", 00:12:44.199 "method": "nvmf_create_subsystem", 00:12:44.199 "req_id": 1 00:12:44.199 } 00:12:44.199 Got JSON-RPC error response 00:12:44.199 response: 00:12:44.199 { 00:12:44.199 "code": -32602, 00:12:44.199 "message": "Invalid MN {yt%?h/2b$>A1I4;I\"B;]F3)8/,s&#&jzDB]4,n+o" 00:12:44.199 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:44.199 17:58:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:12:44.199 [2024-11-20 17:58:02.699837] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1501810/0x1505d00) succeed. 00:12:44.199 [2024-11-20 17:58:02.716174] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1502ea0/0x15473a0) succeed. 
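The nvmf_invalid trace above is target/invalid.sh building a random model number one character at a time: printf %x converts a code point to hex, echo -e renders the \xNN escape, and string+= appends the result before the string is handed to rpc.py nvmf_create_subsystem -d. A minimal bash sketch of the same technique, with illustrative names; only the printf/echo/append steps appear in the trace, so how the random code point is picked here is an assumption:

    gen_string() {
        local length=$1 string='' ll chr
        for (( ll = 0; ll < length; ll++ )); do
            # assumed RNG: any printable, non-space ASCII code point (33-126)
            chr=$(printf '%x' $(( RANDOM % 94 + 33 )))
            string+=$(echo -e "\\x${chr}")   # e.g. \x4d -> M
        done
        echo "$string"
    }
    gen_string 41

The test passes only if the target rejects the generated string with an "Invalid MN" JSON-RPC error, as seen in the response above.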
00:12:44.512 17:58:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:12:44.773 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]]
00:12:44.773 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8
00:12:44.773 192.168.100.9'
00:12:44.773 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:12:44.773 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8
00:12:44.773 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421
00:12:44.773 [2024-11-20 17:58:03.223842] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:12:44.773 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:12:44.773 {
00:12:44.773 "nqn": "nqn.2016-06.io.spdk:cnode",
00:12:44.773 "listen_address": {
00:12:44.773 "trtype": "rdma",
00:12:44.773 "traddr": "192.168.100.8",
00:12:44.773 "trsvcid": "4421"
00:12:44.773 },
00:12:44.773 "method": "nvmf_subsystem_remove_listener",
00:12:44.773 "req_id": 1
00:12:44.773 }
00:12:44.773 Got JSON-RPC error response
00:12:44.773 response:
00:12:44.773 {
00:12:44.773 "code": -32602,
00:12:44.773 "message": "Invalid parameters"
00:12:44.773 }'
00:12:44.773 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:12:44.773 {
00:12:44.773 "nqn": "nqn.2016-06.io.spdk:cnode",
00:12:44.773 "listen_address": {
00:12:44.773 "trtype": "rdma",
00:12:44.773 "traddr": "192.168.100.8",
00:12:44.773 "trsvcid": "4421"
00:12:44.773 },
00:12:44.773 "method": "nvmf_subsystem_remove_listener",
00:12:44.773 "req_id": 1
00:12:44.773 }
00:12:44.773 Got JSON-RPC error response
00:12:44.773 response:
00:12:44.773 {
00:12:44.773 "code": -32602,
00:12:44.773 "message": "Invalid parameters"
00:12:44.773 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:12:44.773 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode683 -i 0
00:12:45.035 [2024-11-20 17:58:03.412458] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode683: invalid cntlid range [0-65519]
00:12:45.035 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:12:45.035 {
00:12:45.035 "nqn": "nqn.2016-06.io.spdk:cnode683",
00:12:45.035 "min_cntlid": 0,
00:12:45.035 "method": "nvmf_create_subsystem",
00:12:45.035 "req_id": 1
00:12:45.035 }
00:12:45.035 Got JSON-RPC error response
00:12:45.035 response:
00:12:45.035 {
00:12:45.035 "code": -32602,
00:12:45.035 "message": "Invalid cntlid range [0-65519]"
00:12:45.035 }'
00:12:45.035 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:12:45.035 {
00:12:45.035 "nqn": "nqn.2016-06.io.spdk:cnode683",
00:12:45.035 "min_cntlid": 0,
00:12:45.035 "method": "nvmf_create_subsystem",
00:12:45.035 "req_id": 1
00:12:45.035 }
00:12:45.035 Got JSON-RPC error response
00:12:45.035 response:
00:12:45.035 {
00:12:45.035 "code": -32602,
00:12:45.035 "message": "Invalid cntlid range [0-65519]"
00:12:45.035 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:12:45.035 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9818 -i 65520
00:12:45.296 [2024-11-20 17:58:03.593114] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9818: invalid cntlid range [65520-65519]
00:12:45.296 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:12:45.296 {
00:12:45.296 "nqn": "nqn.2016-06.io.spdk:cnode9818",
00:12:45.296 "min_cntlid": 65520,
00:12:45.296 "method": "nvmf_create_subsystem",
00:12:45.296 "req_id": 1
00:12:45.296 }
00:12:45.296 Got JSON-RPC error response
00:12:45.296 response:
00:12:45.296 {
00:12:45.296 "code": -32602,
00:12:45.296 "message": "Invalid cntlid range [65520-65519]"
00:12:45.296 }'
00:12:45.296 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:12:45.296 {
00:12:45.296 "nqn": "nqn.2016-06.io.spdk:cnode9818",
00:12:45.296 "min_cntlid": 65520,
00:12:45.296 "method": "nvmf_create_subsystem",
00:12:45.296 "req_id": 1
00:12:45.296 }
00:12:45.296 Got JSON-RPC error response
00:12:45.296 response:
00:12:45.296 {
00:12:45.296 "code": -32602,
00:12:45.296 "message": "Invalid cntlid range [65520-65519]"
00:12:45.296 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:12:45.296 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17629 -I 0
00:12:45.296 [2024-11-20 17:58:03.777773] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17629: invalid cntlid range [1-0]
00:12:45.296 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:12:45.296 {
00:12:45.296 "nqn": "nqn.2016-06.io.spdk:cnode17629",
00:12:45.296 "max_cntlid": 0,
00:12:45.296 "method": "nvmf_create_subsystem",
00:12:45.296 "req_id": 1
00:12:45.296 }
00:12:45.296 Got JSON-RPC error response
00:12:45.296 response:
00:12:45.296 {
00:12:45.296 "code": -32602,
00:12:45.296 "message": "Invalid cntlid range [1-0]"
00:12:45.296 }'
00:12:45.296 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:12:45.296 {
00:12:45.296 "nqn": "nqn.2016-06.io.spdk:cnode17629",
00:12:45.296 "max_cntlid": 0,
00:12:45.296 "method": "nvmf_create_subsystem",
00:12:45.296 "req_id": 1
00:12:45.296 }
00:12:45.296 Got JSON-RPC error response
00:12:45.296 response:
00:12:45.296 {
00:12:45.296 "code": -32602,
00:12:45.296 "message": "Invalid cntlid range [1-0]"
00:12:45.296 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:12:45.296 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20767 -I 65520
00:12:45.558 [2024-11-20 17:58:03.966465] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20767: invalid cntlid range [1-65520]
00:12:45.558 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:12:45.558 {
00:12:45.558 "nqn": "nqn.2016-06.io.spdk:cnode20767",
00:12:45.558 "max_cntlid": 65520,
00:12:45.558 "method": "nvmf_create_subsystem",
00:12:45.558 "req_id": 1
00:12:45.558 }
00:12:45.558 Got JSON-RPC error response
00:12:45.558 response:
00:12:45.558 {
00:12:45.558 "code": -32602,
00:12:45.558 "message": "Invalid cntlid range [1-65520]"
00:12:45.558 }'
00:12:45.558 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:12:45.558 {
00:12:45.558 "nqn": "nqn.2016-06.io.spdk:cnode20767",
00:12:45.558 "max_cntlid": 65520,
00:12:45.558 "method": "nvmf_create_subsystem",
00:12:45.558 "req_id": 1
00:12:45.558 }
00:12:45.558 Got JSON-RPC error response
00:12:45.558 response:
00:12:45.558 {
00:12:45.558 "code": -32602,
00:12:45.558 "message": "Invalid cntlid range [1-65520]"
00:12:45.558 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:12:45.558 17:58:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22555 -i 6 -I 5
00:12:45.820 [2024-11-20 17:58:04.151113] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22555: invalid cntlid range [6-5]
00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:12:45.820 {
00:12:45.820 "nqn": "nqn.2016-06.io.spdk:cnode22555",
00:12:45.820 "min_cntlid": 6,
00:12:45.820 "max_cntlid": 5,
00:12:45.820 "method": "nvmf_create_subsystem",
00:12:45.820 "req_id": 1
00:12:45.820 }
00:12:45.820 Got JSON-RPC error response
00:12:45.820 response:
00:12:45.820 {
00:12:45.820 "code": -32602,
00:12:45.820 "message": "Invalid cntlid range [6-5]"
00:12:45.820 }'
00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:12:45.820 {
00:12:45.820 "nqn": "nqn.2016-06.io.spdk:cnode22555",
00:12:45.820 "min_cntlid": 6,
00:12:45.820 "max_cntlid": 5,
00:12:45.820 "method": "nvmf_create_subsystem",
00:12:45.820 "req_id": 1
00:12:45.820 }
00:12:45.820 Got JSON-RPC error response
00:12:45.820 response:
00:12:45.820 {
00:12:45.820 "code": -32602,
00:12:45.820 "message": "Invalid cntlid range [6-5]"
00:12:45.820 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:12:45.820 {
00:12:45.820 "name": "foobar",
00:12:45.820 "method": "nvmf_delete_target",
00:12:45.820 "req_id": 1
00:12:45.820 }
00:12:45.820 Got JSON-RPC error response
00:12:45.820 response:
00:12:45.820 {
00:12:45.820 "code": -32602,
00:12:45.820 "message": "The specified target doesn'\''t exist, cannot delete it."
00:12:45.820 }'
00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:12:45.820 {
00:12:45.820 "name": "foobar",
00:12:45.820 "method": "nvmf_delete_target",
00:12:45.820 "req_id": 1
00:12:45.820 }
00:12:45.820 Got JSON-RPC error response
00:12:45.820 response:
00:12:45.820 {
00:12:45.820 "code": -32602,
00:12:45.820 "message": "The specified target doesn't exist, cannot delete it."
00:12:45.820 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:45.820 rmmod nvme_rdma 00:12:45.820 rmmod nvme_fabrics 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3488302 ']' 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3488302 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3488302 ']' 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3488302 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.820 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3488302 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3488302' 00:12:46.081 killing process with pid 3488302 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3488302 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3488302 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:46.081 00:12:46.081 real 0m12.915s 00:12:46.081 user 0m21.725s 00:12:46.081 sys 0m7.148s 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.081 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:46.081 ************************************ 00:12:46.081 
END TEST nvmf_invalid 00:12:46.081 ************************************ 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.343 ************************************ 00:12:46.343 START TEST nvmf_connect_stress 00:12:46.343 ************************************ 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:46.343 * Looking for test storage... 00:12:46.343 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:46.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.343 --rc genhtml_branch_coverage=1 00:12:46.343 --rc genhtml_function_coverage=1 00:12:46.343 --rc genhtml_legend=1 00:12:46.343 --rc geninfo_all_blocks=1 00:12:46.343 --rc geninfo_unexecuted_blocks=1 00:12:46.343 00:12:46.343 ' 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:46.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.343 --rc genhtml_branch_coverage=1 00:12:46.343 --rc genhtml_function_coverage=1 00:12:46.343 --rc genhtml_legend=1 00:12:46.343 --rc geninfo_all_blocks=1 00:12:46.343 --rc geninfo_unexecuted_blocks=1 00:12:46.343 00:12:46.343 ' 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:46.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.343 --rc genhtml_branch_coverage=1 00:12:46.343 --rc genhtml_function_coverage=1 00:12:46.343 --rc genhtml_legend=1 00:12:46.343 --rc geninfo_all_blocks=1 00:12:46.343 --rc geninfo_unexecuted_blocks=1 00:12:46.343 00:12:46.343 ' 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:46.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.343 --rc genhtml_branch_coverage=1 00:12:46.343 --rc genhtml_function_coverage=1 00:12:46.343 --rc genhtml_legend=1 00:12:46.343 --rc geninfo_all_blocks=1 00:12:46.343 --rc geninfo_unexecuted_blocks=1 00:12:46.343 00:12:46.343 ' 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.343 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.344 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.344 17:58:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:54.486 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:54.486 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:54.486 Found net devices under 0000:98:00.0: mlx_0_0 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:54.486 Found net devices under 0000:98:00.1: mlx_0_1 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.486 17:58:12 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.486 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.487 
17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:54.487 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:54.487 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:54.487 altname enp152s0f0np0 00:12:54.487 altname ens817f0np0 00:12:54.487 inet 192.168.100.8/24 scope global mlx_0_0 00:12:54.487 valid_lft forever preferred_lft forever 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:54.487 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:54.487 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:54.487 altname enp152s0f1np1 00:12:54.487 altname ens817f1np1 00:12:54.487 inet 192.168.100.9/24 scope global mlx_0_1 00:12:54.487 valid_lft forever preferred_lft forever 00:12:54.487 17:58:12 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.487 
17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:54.487 192.168.100.9' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:54.487 192.168.100.9' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:54.487 192.168.100.9' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:54.487 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3493572 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3493572 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3493572 ']' 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.488 17:58:12 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.488 17:58:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.488 [2024-11-20 17:58:12.799678] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:12:54.488 [2024-11-20 17:58:12.799748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.488 [2024-11-20 17:58:12.901286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.488 [2024-11-20 17:58:12.952620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.488 [2024-11-20 17:58:12.952672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.488 [2024-11-20 17:58:12.952681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.488 [2024-11-20 17:58:12.952688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.488 [2024-11-20 17:58:12.952694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.488 [2024-11-20 17:58:12.954672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.488 [2024-11-20 17:58:12.954836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.488 [2024-11-20 17:58:12.954836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.432 [2024-11-20 17:58:13.694476] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x881650/0x885b40) succeed. 
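
nvmfappstart above boots the target with core mask 0xE (three reactors, matching the "Total cores available: 3" notice) and waits on the UNIX RPC socket before the first rpc_cmd registers the RDMA transport; the "Create IB device mlx5_0 ... succeed" notice confirms the first port was claimed. A rough standalone equivalent — a sketch, assuming a built SPDK tree at $SPDK_DIR and using an rpc_get_methods poll in place of the harness's waitforlisten:

    # Start the NVMe-oF target with the same flags as this run (-m 0xE = cores 1-3).
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll the default RPC socket until the app is ready to accept commands.
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

    # Register the RDMA transport with the parameters traced above.
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
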
00:12:55.432 [2024-11-20 17:58:13.708847] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x882c40/0x8c71e0) succeed. 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.432 [2024-11-20 17:58:13.828194] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.432 NULL1 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3493644 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 
17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 
17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.432 17:58:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.004 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.004 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:56.004 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.004 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.004 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.264 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.264 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:56.264 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.264 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.264 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.524 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.524 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:56.524 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.524 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.524 17:58:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.784 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.784 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:56.784 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.784 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.784 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.355 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.355 
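
The block above finishes wiring the stress test: a subsystem capped at 10 namespaces, an RDMA listener on 192.168.100.8:4420, and a 1000 MB / 512-byte-block null bdev; connect_stress then runs for 10 seconds (-t 10) while the harness loops, checking the perf process is still alive with kill -0 and replaying the pre-built rpc.txt batch. A condensed sketch of that flow, assuming the same $SPDK_DIR tree, NQN, and address as this run (the 20 batch entries from the seq/cat loop are not visible in the trace, so the file is treated as given):

    # Subsystem wiring traced above, expressed as plain rpc.py calls (rpc_cmd wraps this).
    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks

    # Launch the stressor for 10 s against that listener and remember its PID (PERF_PID).
    "$SPDK_DIR"/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # kill -0 delivers no signal -- it only tests whether the PID still exists.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc < rpc.txt        # rpc.py executes one RPC per stdin line
    done
    wait "$PERF_PID"
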
17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:57.355 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.355 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.355 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.616 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.616 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:57.616 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.616 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.616 17:58:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.877 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.877 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:57.877 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.877 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.877 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.138 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.139 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:58.139 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.139 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.139 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.399 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.399 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:58.400 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.400 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.400 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.971 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.971 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:58.971 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.971 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.971 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.232 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:59.232 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:59.232 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.232 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.232 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.495 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.495 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:59.495 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.495 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.495 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.756 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.756 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:12:59.756 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.756 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.756 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.329 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.329 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:00.329 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.329 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.329 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.589 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.589 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:00.589 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.589 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.589 17:58:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.850 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.850 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:00.850 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.850 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.850 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.111 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:01.111 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:01.111 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.112 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.112 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.372 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.372 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:01.372 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.372 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.372 17:58:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.945 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.945 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:01.945 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.945 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.945 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.206 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.207 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:02.207 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.207 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.207 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.468 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.468 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:02.468 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.468 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.468 17:58:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.729 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.729 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:02.729 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.729 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.729 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.990 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.990 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:02.990 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.991 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.991 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.562 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.562 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:03.562 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.562 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.562 17:58:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.823 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.823 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:03.823 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.823 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.823 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.084 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.084 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:04.084 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.084 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.084 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.346 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.346 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:04.346 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.346 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.346 17:58:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.918 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.918 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:04.918 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.918 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.918 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.179 17:58:23 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.179 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:05.179 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.179 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.179 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.440 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.440 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:05.440 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.440 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.440 17:58:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.440 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3493644 00:13:05.702 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3493644) - No such process 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3493644 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:05.702 rmmod nvme_rdma 00:13:05.702 rmmod nvme_fabrics 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3493572 ']' 00:13:05.702 17:58:24 
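
Once the stressor exits, the next kill -0 fails with "No such process" — that is the loop's expected exit condition, not a failure — and nvmftestfini unwinds: traps cleared, rpc.txt removed, then nvme-rdma and nvme-fabrics unloaded (the rmmod lines above). A condensed sketch of that cleanup plus the killprocess step that follows; the retry shape inside the {1..20} loop is an assumption, and $nvmfpid is the PID captured at startup:

    # nvmfcleanup, abridged: flush, then retry module removal while references drain.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    # killprocess: check what the PID currently names before killing it
    # (ps reported reactor_1 in this run), and never kill a sudo wrapper.
    process_name=$(ps --no-headers -o comm= "$nvmfpid")
    if [[ $process_name != sudo ]]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi
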
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3493572 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3493572 ']' 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3493572 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.702 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3493572 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3493572' 00:13:05.963 killing process with pid 3493572 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3493572 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3493572 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:05.963 00:13:05.963 real 0m19.776s 00:13:05.963 user 0m41.960s 00:13:05.963 sys 0m7.706s 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.963 ************************************ 00:13:05.963 END TEST nvmf_connect_stress 00:13:05.963 ************************************ 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.963 17:58:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.226 ************************************ 00:13:06.226 START TEST nvmf_fused_ordering 00:13:06.226 ************************************ 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:06.226 * Looking for test storage... 
00:13:06.226 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:06.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.226 --rc genhtml_branch_coverage=1 00:13:06.226 --rc genhtml_function_coverage=1 00:13:06.226 --rc genhtml_legend=1 00:13:06.226 --rc geninfo_all_blocks=1 00:13:06.226 --rc geninfo_unexecuted_blocks=1 00:13:06.226 00:13:06.226 ' 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:06.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.226 --rc genhtml_branch_coverage=1 00:13:06.226 --rc genhtml_function_coverage=1 00:13:06.226 --rc genhtml_legend=1 00:13:06.226 --rc geninfo_all_blocks=1 00:13:06.226 --rc geninfo_unexecuted_blocks=1 00:13:06.226 00:13:06.226 ' 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:06.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.226 --rc genhtml_branch_coverage=1 00:13:06.226 --rc genhtml_function_coverage=1 00:13:06.226 --rc genhtml_legend=1 00:13:06.226 --rc geninfo_all_blocks=1 00:13:06.226 --rc geninfo_unexecuted_blocks=1 00:13:06.226 00:13:06.226 ' 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:06.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.226 --rc genhtml_branch_coverage=1 00:13:06.226 --rc genhtml_function_coverage=1 00:13:06.226 --rc genhtml_legend=1 00:13:06.226 --rc geninfo_all_blocks=1 00:13:06.226 --rc geninfo_unexecuted_blocks=1 00:13:06.226 00:13:06.226 ' 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.226 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:06.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:06.227 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
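
The "[: : integer expression expected" complaint above is a real, if harmless, quirk in the sourced common.sh: an empty variable reaches a numeric '[' ... -eq 1 ']' test, and test(1) refuses to compare '' as an integer. Which flag is empty is not visible in the trace; the sketch below reproduces the failure generically and shows a defensive spelling:

    # test(1)'s -eq requires two integers; an empty expansion triggers the error above.
    flag=''                           # whichever unset CI flag common.sh line 33 reads
    [ "$flag" -eq 1 ] && echo on      # -> [: : integer expression expected (exit 2)

    # Defaulting the expansion keeps the test quiet and treats "unset" as "off":
    [ "${flag:-0}" -eq 1 ] && echo on
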
local -ga x722 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:14.374 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:14.374 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.374 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:14.375 Found net devices under 0000:98:00.0: mlx_0_0 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:14.375 Found net devices under 0000:98:00.1: mlx_0_1 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.375 17:58:32 
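
Device discovery above matches the two ports (device ID 0x1015 under Mellanox vendor 0x15b3) and resolves each PCI function to its kernel net device through sysfs. A minimal sketch of that mapping, using the PCI addresses from this run:

    # Map a PCI function to its network interface(s) via sysfs, as the trace above does.
    for pci in 0000:98:00.0 0000:98:00.1; do               # mlx5 ports found in this log
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
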
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:14.375 
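rdma_device_init above first loads the kernel IB/RDMA stack before touching any interface. The module list is exactly what the trace shows; a standalone sketch of that loader (root required, Linux only, same uname guard as the harness):

    # Sketch: load the IB/RDMA kernel modules used by the test (bash)
    load_ib_rdma_modules() {
        [[ $(uname) == Linux ]] || return 0   # trace: '[' Linux '!=' Linux ']'
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod" || return 1
        done
    }
    load_ib_rdma_modules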
17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:14.375 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:14.375 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:14.375 altname enp152s0f0np0 00:13:14.375 altname ens817f0np0 00:13:14.375 inet 192.168.100.8/24 scope global mlx_0_0 00:13:14.375 valid_lft forever preferred_lft forever 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:14.375 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:14.637 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:14.637 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:14.637 altname enp152s0f1np1 00:13:14.637 altname ens817f1np1 00:13:14.637 inet 192.168.100.9/24 scope global mlx_0_1 00:13:14.637 valid_lft forever preferred_lft forever 00:13:14.637 17:58:32 
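Each RDMA interface's IPv4 address is recovered above by parsing one line of `ip -o` output: field 4 carries addr/prefix, and the prefix is cut away. A minimal sketch of that helper; mlx_0_0 in the usage line is just the example interface from this node:

    # Sketch: first IPv4 address of an interface, as parsed in the trace (bash)
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(get_ip_address mlx_0_0)   # -> 192.168.100.8 on this test node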
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:14.637 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:14.637 
17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:14.638 192.168.100.9' 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:14.638 192.168.100.9' 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:14.638 192.168.100.9' 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:14.638 17:58:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3500007 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3500007 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3500007 ']' 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.638 17:58:33 
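RDMA_IP_LIST is a newline-separated string, so the first and second target IPs are peeled off with head/tail exactly as traced above. A sketch of that selection, assuming the list was built from the two mlx interfaces:

    # Sketch: pick first and second target IPs from a newline-separated list (bash)
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9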
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.638 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:14.638 [2024-11-20 17:58:33.078788] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:14.638 [2024-11-20 17:58:33.078841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.900 [2024-11-20 17:58:33.177341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.900 [2024-11-20 17:58:33.224320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.900 [2024-11-20 17:58:33.224375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.900 [2024-11-20 17:58:33.224383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.900 [2024-11-20 17:58:33.224391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.900 [2024-11-20 17:58:33.224397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.900 [2024-11-20 17:58:33.225192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.472 17:58:33 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.472 [2024-11-20 17:58:33.972328] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a0df90/0x1a12480) succeed. 00:13:15.472 [2024-11-20 17:58:33.985901] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a0f440/0x1a53b20) succeed. 
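nvmfappstart launches nvmf_tgt in the background and then blocks in waitforlisten until the app's RPC socket /var/tmp/spdk.sock is usable (the trace shows max_retries=100). A minimal stand-in for that wait, probing only socket existence and process liveness rather than calling SPDK's real helper:

    # Sketch: wait for an SPDK app to create its RPC socket (bash)
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            [[ -S $rpc_addr ]] && return 0           # socket is up
            sleep 0.5                                # assumed poll interval
        done
        return 1
    }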
00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.733 [2024-11-20 17:58:34.045839] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.733 NULL1 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.733 17:58:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:15.733 [2024-11-20 17:58:34.115909] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
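Before the fused_ordering client runs, the target is provisioned over RPC in the order traced above: create the RDMA transport, create subsystem cnode1 (any host allowed, 10 namespaces max), attach an RDMA listener on the first target IP, back it with a 1000 MB null bdev, and expose that bdev as a namespace. The same sequence issued by hand, assuming rpc_cmd maps to scripts/rpc.py against the default socket, which is how the harness issues RPCs:

    # Sketch: the RPC sequence from fused_ordering.sh, issued directly (bash)
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512   # 1000 MB, 512-byte blocks -> "size: 1GB" below
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1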
00:13:15.733 [2024-11-20 17:58:34.115954] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500333 ] 00:13:15.994 Attached to nqn.2016-06.io.spdk:cnode1 00:13:15.994 Namespace ID: 1 size: 1GB 00:13:15.994 fused_ordering(0) 00:13:15.994 fused_ordering(1) 00:13:15.994 fused_ordering(2) 00:13:15.994 fused_ordering(3) 00:13:15.994 fused_ordering(4) 00:13:15.994 fused_ordering(5) 00:13:15.994 fused_ordering(6) 00:13:15.994 fused_ordering(7) 00:13:15.994 fused_ordering(8) 00:13:15.994 fused_ordering(9) 00:13:15.994 fused_ordering(10) 00:13:15.994 fused_ordering(11) 00:13:15.994 fused_ordering(12) 00:13:15.994 fused_ordering(13) 00:13:15.994 fused_ordering(14) 00:13:15.994 fused_ordering(15) 00:13:15.994 fused_ordering(16) 00:13:15.994 fused_ordering(17) 00:13:15.994 fused_ordering(18) 00:13:15.994 fused_ordering(19) 00:13:15.994 fused_ordering(20) 00:13:15.994 fused_ordering(21) 00:13:15.994 fused_ordering(22) 00:13:15.994 fused_ordering(23) 00:13:15.994 fused_ordering(24) 00:13:15.994 fused_ordering(25) 00:13:15.994 fused_ordering(26) 00:13:15.994 fused_ordering(27) 00:13:15.994 fused_ordering(28) 00:13:15.994 fused_ordering(29) 00:13:15.994 fused_ordering(30) 00:13:15.994 fused_ordering(31) 00:13:15.994 fused_ordering(32) 00:13:15.994 fused_ordering(33) 00:13:15.994 fused_ordering(34) 00:13:15.994 fused_ordering(35) 00:13:15.995 fused_ordering(36) 00:13:15.995 fused_ordering(37) 00:13:15.995 fused_ordering(38) 00:13:15.995 fused_ordering(39) 00:13:15.995 fused_ordering(40) 00:13:15.995 fused_ordering(41) 00:13:15.995 fused_ordering(42) 00:13:15.995 fused_ordering(43) 00:13:15.995 fused_ordering(44) 00:13:15.995 fused_ordering(45) 00:13:15.995 fused_ordering(46) 00:13:15.995 fused_ordering(47) 00:13:15.995 fused_ordering(48) 00:13:15.995 fused_ordering(49) 00:13:15.995 fused_ordering(50) 00:13:15.995 fused_ordering(51) 00:13:15.995 fused_ordering(52) 00:13:15.995 fused_ordering(53) 00:13:15.995 fused_ordering(54) 00:13:15.995 fused_ordering(55) 00:13:15.995 fused_ordering(56) 00:13:15.995 fused_ordering(57) 00:13:15.995 fused_ordering(58) 00:13:15.995 fused_ordering(59) 00:13:15.995 fused_ordering(60) 00:13:15.995 fused_ordering(61) 00:13:15.995 fused_ordering(62) 00:13:15.995 fused_ordering(63) 00:13:15.995 fused_ordering(64) 00:13:15.995 fused_ordering(65) 00:13:15.995 fused_ordering(66) 00:13:15.995 fused_ordering(67) 00:13:15.995 fused_ordering(68) 00:13:15.995 fused_ordering(69) 00:13:15.995 fused_ordering(70) 00:13:15.995 fused_ordering(71) 00:13:15.995 fused_ordering(72) 00:13:15.995 fused_ordering(73) 00:13:15.995 fused_ordering(74) 00:13:15.995 fused_ordering(75) 00:13:15.995 fused_ordering(76) 00:13:15.995 fused_ordering(77) 00:13:15.995 fused_ordering(78) 00:13:15.995 fused_ordering(79) 00:13:15.995 fused_ordering(80) 00:13:15.995 fused_ordering(81) 00:13:15.995 fused_ordering(82) 00:13:15.995 fused_ordering(83) 00:13:15.995 fused_ordering(84) 00:13:15.995 fused_ordering(85) 00:13:15.995 fused_ordering(86) 00:13:15.995 fused_ordering(87) 00:13:15.995 fused_ordering(88) 00:13:15.995 fused_ordering(89) 00:13:15.995 fused_ordering(90) 00:13:15.995 fused_ordering(91) 00:13:15.995 fused_ordering(92) 00:13:15.995 fused_ordering(93) 00:13:15.995 fused_ordering(94) 00:13:15.995 fused_ordering(95) 00:13:15.995 fused_ordering(96) 00:13:15.995 fused_ordering(97) 00:13:15.995 fused_ordering(98) 
00:13:15.995 fused_ordering(99) 00:13:15.995 fused_ordering(100) 00:13:15.995 fused_ordering(101) 00:13:15.995 fused_ordering(102) 00:13:15.995 fused_ordering(103) 00:13:15.995 fused_ordering(104) 00:13:15.995 fused_ordering(105) 00:13:15.995 fused_ordering(106) 00:13:15.995 fused_ordering(107) 00:13:15.995 fused_ordering(108) 00:13:15.995 fused_ordering(109) 00:13:15.995 fused_ordering(110) 00:13:15.995 fused_ordering(111) 00:13:15.995 fused_ordering(112) 00:13:15.995 fused_ordering(113) 00:13:15.995 fused_ordering(114) 00:13:15.995 fused_ordering(115) 00:13:15.995 fused_ordering(116) 00:13:15.995 fused_ordering(117) 00:13:15.995 fused_ordering(118) 00:13:15.995 fused_ordering(119) 00:13:15.995 fused_ordering(120) 00:13:15.995 fused_ordering(121) 00:13:15.995 fused_ordering(122) 00:13:15.995 fused_ordering(123) 00:13:15.995 fused_ordering(124) 00:13:15.995 fused_ordering(125) 00:13:15.995 fused_ordering(126) 00:13:15.995 fused_ordering(127) 00:13:15.995 fused_ordering(128) 00:13:15.995 fused_ordering(129) 00:13:15.995 fused_ordering(130) 00:13:15.995 fused_ordering(131) 00:13:15.995 fused_ordering(132) 00:13:15.995 fused_ordering(133) 00:13:15.995 fused_ordering(134) 00:13:15.995 fused_ordering(135) 00:13:15.995 fused_ordering(136) 00:13:15.995 fused_ordering(137) 00:13:15.995 fused_ordering(138) 00:13:15.995 fused_ordering(139) 00:13:15.995 fused_ordering(140) 00:13:15.995 fused_ordering(141) 00:13:15.995 fused_ordering(142) 00:13:15.995 fused_ordering(143) 00:13:15.995 fused_ordering(144) 00:13:15.995 fused_ordering(145) 00:13:15.995 fused_ordering(146) 00:13:15.995 fused_ordering(147) 00:13:15.995 fused_ordering(148) 00:13:15.995 fused_ordering(149) 00:13:15.995 fused_ordering(150) 00:13:15.995 fused_ordering(151) 00:13:15.995 fused_ordering(152) 00:13:15.995 fused_ordering(153) 00:13:15.995 fused_ordering(154) 00:13:15.995 fused_ordering(155) 00:13:15.995 fused_ordering(156) 00:13:15.995 fused_ordering(157) 00:13:15.995 fused_ordering(158) 00:13:15.995 fused_ordering(159) 00:13:15.995 fused_ordering(160) 00:13:15.995 fused_ordering(161) 00:13:15.995 fused_ordering(162) 00:13:15.995 fused_ordering(163) 00:13:15.995 fused_ordering(164) 00:13:15.995 fused_ordering(165) 00:13:15.995 fused_ordering(166) 00:13:15.995 fused_ordering(167) 00:13:15.995 fused_ordering(168) 00:13:15.995 fused_ordering(169) 00:13:15.995 fused_ordering(170) 00:13:15.995 fused_ordering(171) 00:13:15.995 fused_ordering(172) 00:13:15.995 fused_ordering(173) 00:13:15.995 fused_ordering(174) 00:13:15.995 fused_ordering(175) 00:13:15.995 fused_ordering(176) 00:13:15.995 fused_ordering(177) 00:13:15.995 fused_ordering(178) 00:13:15.995 fused_ordering(179) 00:13:15.995 fused_ordering(180) 00:13:15.995 fused_ordering(181) 00:13:15.995 fused_ordering(182) 00:13:15.995 fused_ordering(183) 00:13:15.995 fused_ordering(184) 00:13:15.995 fused_ordering(185) 00:13:15.995 fused_ordering(186) 00:13:15.995 fused_ordering(187) 00:13:15.995 fused_ordering(188) 00:13:15.995 fused_ordering(189) 00:13:15.995 fused_ordering(190) 00:13:15.995 fused_ordering(191) 00:13:15.995 fused_ordering(192) 00:13:15.995 fused_ordering(193) 00:13:15.995 fused_ordering(194) 00:13:15.995 fused_ordering(195) 00:13:15.995 fused_ordering(196) 00:13:15.995 fused_ordering(197) 00:13:15.995 fused_ordering(198) 00:13:15.995 fused_ordering(199) 00:13:15.995 fused_ordering(200) 00:13:15.995 fused_ordering(201) 00:13:15.995 fused_ordering(202) 00:13:15.995 fused_ordering(203) 00:13:15.995 fused_ordering(204) 00:13:15.995 fused_ordering(205) 00:13:15.995 
fused_ordering(206) 00:13:15.995 fused_ordering(207) 00:13:15.995 fused_ordering(208) 00:13:15.995 fused_ordering(209) 00:13:15.995 fused_ordering(210) 00:13:15.995 fused_ordering(211) 00:13:15.995 fused_ordering(212) 00:13:15.995 fused_ordering(213) 00:13:15.995 fused_ordering(214) 00:13:15.995 fused_ordering(215) 00:13:15.995 fused_ordering(216) 00:13:15.995 fused_ordering(217) 00:13:15.995 fused_ordering(218) 00:13:15.995 fused_ordering(219) 00:13:15.995 fused_ordering(220) 00:13:15.995 fused_ordering(221) 00:13:15.995 fused_ordering(222) 00:13:15.995 fused_ordering(223) 00:13:15.995 fused_ordering(224) 00:13:15.995 fused_ordering(225) 00:13:15.995 fused_ordering(226) 00:13:15.995 fused_ordering(227) 00:13:15.995 fused_ordering(228) 00:13:15.995 fused_ordering(229) 00:13:15.995 fused_ordering(230) 00:13:15.995 fused_ordering(231) 00:13:15.995 fused_ordering(232) 00:13:15.995 fused_ordering(233) 00:13:15.995 fused_ordering(234) 00:13:15.995 fused_ordering(235) 00:13:15.995 fused_ordering(236) 00:13:15.995 fused_ordering(237) 00:13:15.995 fused_ordering(238) 00:13:15.995 fused_ordering(239) 00:13:15.995 fused_ordering(240) 00:13:15.995 fused_ordering(241) 00:13:15.995 fused_ordering(242) 00:13:15.995 fused_ordering(243) 00:13:15.995 fused_ordering(244) 00:13:15.995 fused_ordering(245) 00:13:15.995 fused_ordering(246) 00:13:15.995 fused_ordering(247) 00:13:15.995 fused_ordering(248) 00:13:15.995 fused_ordering(249) 00:13:15.995 fused_ordering(250) 00:13:15.995 fused_ordering(251) 00:13:15.995 fused_ordering(252) 00:13:15.995 fused_ordering(253) 00:13:15.995 fused_ordering(254) 00:13:15.995 fused_ordering(255) 00:13:15.995 fused_ordering(256) 00:13:15.995 fused_ordering(257) 00:13:15.995 fused_ordering(258) 00:13:15.995 fused_ordering(259) 00:13:15.995 fused_ordering(260) 00:13:15.995 fused_ordering(261) 00:13:15.995 fused_ordering(262) 00:13:15.995 fused_ordering(263) 00:13:15.995 fused_ordering(264) 00:13:15.995 fused_ordering(265) 00:13:15.995 fused_ordering(266) 00:13:15.995 fused_ordering(267) 00:13:15.995 fused_ordering(268) 00:13:15.995 fused_ordering(269) 00:13:15.995 fused_ordering(270) 00:13:15.995 fused_ordering(271) 00:13:15.995 fused_ordering(272) 00:13:15.995 fused_ordering(273) 00:13:15.995 fused_ordering(274) 00:13:15.995 fused_ordering(275) 00:13:15.995 fused_ordering(276) 00:13:15.995 fused_ordering(277) 00:13:15.995 fused_ordering(278) 00:13:15.995 fused_ordering(279) 00:13:15.995 fused_ordering(280) 00:13:15.995 fused_ordering(281) 00:13:15.995 fused_ordering(282) 00:13:15.995 fused_ordering(283) 00:13:15.995 fused_ordering(284) 00:13:15.995 fused_ordering(285) 00:13:15.995 fused_ordering(286) 00:13:15.995 fused_ordering(287) 00:13:15.995 fused_ordering(288) 00:13:15.995 fused_ordering(289) 00:13:15.995 fused_ordering(290) 00:13:15.995 fused_ordering(291) 00:13:15.995 fused_ordering(292) 00:13:15.995 fused_ordering(293) 00:13:15.995 fused_ordering(294) 00:13:15.995 fused_ordering(295) 00:13:15.995 fused_ordering(296) 00:13:15.995 fused_ordering(297) 00:13:15.996 fused_ordering(298) 00:13:15.996 fused_ordering(299) 00:13:15.996 fused_ordering(300) 00:13:15.996 fused_ordering(301) 00:13:15.996 fused_ordering(302) 00:13:15.996 fused_ordering(303) 00:13:15.996 fused_ordering(304) 00:13:15.996 fused_ordering(305) 00:13:15.996 fused_ordering(306) 00:13:15.996 fused_ordering(307) 00:13:15.996 fused_ordering(308) 00:13:15.996 fused_ordering(309) 00:13:15.996 fused_ordering(310) 00:13:15.996 fused_ordering(311) 00:13:15.996 fused_ordering(312) 00:13:15.996 fused_ordering(313) 
00:13:15.996 fused_ordering(314) 00:13:15.996 fused_ordering(315) 00:13:15.996 fused_ordering(316) 00:13:15.996 fused_ordering(317) 00:13:15.996 fused_ordering(318) 00:13:15.996 fused_ordering(319) 00:13:15.996 fused_ordering(320) 00:13:15.996 fused_ordering(321) 00:13:15.996 fused_ordering(322) 00:13:15.996 fused_ordering(323) 00:13:15.996 fused_ordering(324) 00:13:15.996 fused_ordering(325) 00:13:15.996 fused_ordering(326) 00:13:15.996 fused_ordering(327) 00:13:15.996 fused_ordering(328) 00:13:15.996 fused_ordering(329) 00:13:15.996 fused_ordering(330) 00:13:15.996 fused_ordering(331) 00:13:15.996 fused_ordering(332) 00:13:15.996 fused_ordering(333) 00:13:15.996 fused_ordering(334) 00:13:15.996 fused_ordering(335) 00:13:15.996 fused_ordering(336) 00:13:15.996 fused_ordering(337) 00:13:15.996 fused_ordering(338) 00:13:15.996 fused_ordering(339) 00:13:15.996 fused_ordering(340) 00:13:15.996 fused_ordering(341) 00:13:15.996 fused_ordering(342) 00:13:15.996 fused_ordering(343) 00:13:15.996 fused_ordering(344) 00:13:15.996 fused_ordering(345) 00:13:15.996 fused_ordering(346) 00:13:15.996 fused_ordering(347) 00:13:15.996 fused_ordering(348) 00:13:15.996 fused_ordering(349) 00:13:15.996 fused_ordering(350) 00:13:15.996 fused_ordering(351) 00:13:15.996 fused_ordering(352) 00:13:15.996 fused_ordering(353) 00:13:15.996 fused_ordering(354) 00:13:15.996 fused_ordering(355) 00:13:15.996 fused_ordering(356) 00:13:15.996 fused_ordering(357) 00:13:15.996 fused_ordering(358) 00:13:15.996 fused_ordering(359) 00:13:15.996 fused_ordering(360) 00:13:15.996 fused_ordering(361) 00:13:15.996 fused_ordering(362) 00:13:15.996 fused_ordering(363) 00:13:15.996 fused_ordering(364) 00:13:15.996 fused_ordering(365) 00:13:15.996 fused_ordering(366) 00:13:15.996 fused_ordering(367) 00:13:15.996 fused_ordering(368) 00:13:15.996 fused_ordering(369) 00:13:15.996 fused_ordering(370) 00:13:15.996 fused_ordering(371) 00:13:15.996 fused_ordering(372) 00:13:15.996 fused_ordering(373) 00:13:15.996 fused_ordering(374) 00:13:15.996 fused_ordering(375) 00:13:15.996 fused_ordering(376) 00:13:15.996 fused_ordering(377) 00:13:15.996 fused_ordering(378) 00:13:15.996 fused_ordering(379) 00:13:15.996 fused_ordering(380) 00:13:15.996 fused_ordering(381) 00:13:15.996 fused_ordering(382) 00:13:15.996 fused_ordering(383) 00:13:15.996 fused_ordering(384) 00:13:15.996 fused_ordering(385) 00:13:15.996 fused_ordering(386) 00:13:15.996 fused_ordering(387) 00:13:15.996 fused_ordering(388) 00:13:15.996 fused_ordering(389) 00:13:15.996 fused_ordering(390) 00:13:15.996 fused_ordering(391) 00:13:15.996 fused_ordering(392) 00:13:15.996 fused_ordering(393) 00:13:15.996 fused_ordering(394) 00:13:15.996 fused_ordering(395) 00:13:15.996 fused_ordering(396) 00:13:15.996 fused_ordering(397) 00:13:15.996 fused_ordering(398) 00:13:15.996 fused_ordering(399) 00:13:15.996 fused_ordering(400) 00:13:15.996 fused_ordering(401) 00:13:15.996 fused_ordering(402) 00:13:15.996 fused_ordering(403) 00:13:15.996 fused_ordering(404) 00:13:15.996 fused_ordering(405) 00:13:15.996 fused_ordering(406) 00:13:15.996 fused_ordering(407) 00:13:15.996 fused_ordering(408) 00:13:15.996 fused_ordering(409) 00:13:15.996 fused_ordering(410) 00:13:16.257 fused_ordering(411) 00:13:16.257 fused_ordering(412) 00:13:16.257 fused_ordering(413) 00:13:16.257 fused_ordering(414) 00:13:16.257 fused_ordering(415) 00:13:16.257 fused_ordering(416) 00:13:16.257 fused_ordering(417) 00:13:16.257 fused_ordering(418) 00:13:16.257 fused_ordering(419) 00:13:16.257 fused_ordering(420) 00:13:16.257 
fused_ordering(421) 00:13:16.257 fused_ordering(422) 00:13:16.257 fused_ordering(423) 00:13:16.257 fused_ordering(424) 00:13:16.257 fused_ordering(425) 00:13:16.257 fused_ordering(426) 00:13:16.257 fused_ordering(427) 00:13:16.257 fused_ordering(428) 00:13:16.257 fused_ordering(429) 00:13:16.257 fused_ordering(430) 00:13:16.257 fused_ordering(431) 00:13:16.257 fused_ordering(432) 00:13:16.257 fused_ordering(433) 00:13:16.257 fused_ordering(434) 00:13:16.257 fused_ordering(435) 00:13:16.257 fused_ordering(436) 00:13:16.257 fused_ordering(437) 00:13:16.257 fused_ordering(438) 00:13:16.257 fused_ordering(439) 00:13:16.257 fused_ordering(440) 00:13:16.257 fused_ordering(441) 00:13:16.257 fused_ordering(442) 00:13:16.257 fused_ordering(443) 00:13:16.257 fused_ordering(444) 00:13:16.257 fused_ordering(445) 00:13:16.257 fused_ordering(446) 00:13:16.257 fused_ordering(447) 00:13:16.257 fused_ordering(448) 00:13:16.257 fused_ordering(449) 00:13:16.257 fused_ordering(450) 00:13:16.257 fused_ordering(451) 00:13:16.257 fused_ordering(452) 00:13:16.257 fused_ordering(453) 00:13:16.257 fused_ordering(454) 00:13:16.257 fused_ordering(455) 00:13:16.257 fused_ordering(456) 00:13:16.257 fused_ordering(457) 00:13:16.257 fused_ordering(458) 00:13:16.257 fused_ordering(459) 00:13:16.257 fused_ordering(460) 00:13:16.257 fused_ordering(461) 00:13:16.257 fused_ordering(462) 00:13:16.257 fused_ordering(463) 00:13:16.257 fused_ordering(464) 00:13:16.257 fused_ordering(465) 00:13:16.257 fused_ordering(466) 00:13:16.257 fused_ordering(467) 00:13:16.257 fused_ordering(468) 00:13:16.257 fused_ordering(469) 00:13:16.257 fused_ordering(470) 00:13:16.257 fused_ordering(471) 00:13:16.257 fused_ordering(472) 00:13:16.257 fused_ordering(473) 00:13:16.257 fused_ordering(474) 00:13:16.257 fused_ordering(475) 00:13:16.257 fused_ordering(476) 00:13:16.257 fused_ordering(477) 00:13:16.257 fused_ordering(478) 00:13:16.257 fused_ordering(479) 00:13:16.257 fused_ordering(480) 00:13:16.257 fused_ordering(481) 00:13:16.257 fused_ordering(482) 00:13:16.257 fused_ordering(483) 00:13:16.257 fused_ordering(484) 00:13:16.257 fused_ordering(485) 00:13:16.257 fused_ordering(486) 00:13:16.257 fused_ordering(487) 00:13:16.257 fused_ordering(488) 00:13:16.257 fused_ordering(489) 00:13:16.257 fused_ordering(490) 00:13:16.257 fused_ordering(491) 00:13:16.257 fused_ordering(492) 00:13:16.257 fused_ordering(493) 00:13:16.257 fused_ordering(494) 00:13:16.257 fused_ordering(495) 00:13:16.257 fused_ordering(496) 00:13:16.257 fused_ordering(497) 00:13:16.257 fused_ordering(498) 00:13:16.257 fused_ordering(499) 00:13:16.257 fused_ordering(500) 00:13:16.257 fused_ordering(501) 00:13:16.257 fused_ordering(502) 00:13:16.257 fused_ordering(503) 00:13:16.257 fused_ordering(504) 00:13:16.257 fused_ordering(505) 00:13:16.257 fused_ordering(506) 00:13:16.257 fused_ordering(507) 00:13:16.257 fused_ordering(508) 00:13:16.257 fused_ordering(509) 00:13:16.257 fused_ordering(510) 00:13:16.257 fused_ordering(511) 00:13:16.257 fused_ordering(512) 00:13:16.257 fused_ordering(513) 00:13:16.257 fused_ordering(514) 00:13:16.257 fused_ordering(515) 00:13:16.257 fused_ordering(516) 00:13:16.257 fused_ordering(517) 00:13:16.257 fused_ordering(518) 00:13:16.257 fused_ordering(519) 00:13:16.257 fused_ordering(520) 00:13:16.257 fused_ordering(521) 00:13:16.257 fused_ordering(522) 00:13:16.257 fused_ordering(523) 00:13:16.257 fused_ordering(524) 00:13:16.257 fused_ordering(525) 00:13:16.257 fused_ordering(526) 00:13:16.257 fused_ordering(527) 00:13:16.257 fused_ordering(528) 
00:13:16.257 fused_ordering(529) 00:13:16.257 fused_ordering(530) 00:13:16.257 fused_ordering(531) 00:13:16.257 fused_ordering(532) 00:13:16.257 fused_ordering(533) 00:13:16.257 fused_ordering(534) 00:13:16.257 fused_ordering(535) 00:13:16.257 fused_ordering(536) 00:13:16.257 fused_ordering(537) 00:13:16.257 fused_ordering(538) 00:13:16.257 fused_ordering(539) 00:13:16.257 fused_ordering(540) 00:13:16.257 fused_ordering(541) 00:13:16.257 fused_ordering(542) 00:13:16.257 fused_ordering(543) 00:13:16.257 fused_ordering(544) 00:13:16.257 fused_ordering(545) 00:13:16.257 fused_ordering(546) 00:13:16.257 fused_ordering(547) 00:13:16.257 fused_ordering(548) 00:13:16.257 fused_ordering(549) 00:13:16.257 fused_ordering(550) 00:13:16.257 fused_ordering(551) 00:13:16.257 fused_ordering(552) 00:13:16.257 fused_ordering(553) 00:13:16.257 fused_ordering(554) 00:13:16.257 fused_ordering(555) 00:13:16.257 fused_ordering(556) 00:13:16.257 fused_ordering(557) 00:13:16.257 fused_ordering(558) 00:13:16.257 fused_ordering(559) 00:13:16.257 fused_ordering(560) 00:13:16.257 fused_ordering(561) 00:13:16.257 fused_ordering(562) 00:13:16.257 fused_ordering(563) 00:13:16.257 fused_ordering(564) 00:13:16.258 fused_ordering(565) 00:13:16.258 fused_ordering(566) 00:13:16.258 fused_ordering(567) 00:13:16.258 fused_ordering(568) 00:13:16.258 fused_ordering(569) 00:13:16.258 fused_ordering(570) 00:13:16.258 fused_ordering(571) 00:13:16.258 fused_ordering(572) 00:13:16.258 fused_ordering(573) 00:13:16.258 fused_ordering(574) 00:13:16.258 fused_ordering(575) 00:13:16.258 fused_ordering(576) 00:13:16.258 fused_ordering(577) 00:13:16.258 fused_ordering(578) 00:13:16.258 fused_ordering(579) 00:13:16.258 fused_ordering(580) 00:13:16.258 fused_ordering(581) 00:13:16.258 fused_ordering(582) 00:13:16.258 fused_ordering(583) 00:13:16.258 fused_ordering(584) 00:13:16.258 fused_ordering(585) 00:13:16.258 fused_ordering(586) 00:13:16.258 fused_ordering(587) 00:13:16.258 fused_ordering(588) 00:13:16.258 fused_ordering(589) 00:13:16.258 fused_ordering(590) 00:13:16.258 fused_ordering(591) 00:13:16.258 fused_ordering(592) 00:13:16.258 fused_ordering(593) 00:13:16.258 fused_ordering(594) 00:13:16.258 fused_ordering(595) 00:13:16.258 fused_ordering(596) 00:13:16.258 fused_ordering(597) 00:13:16.258 fused_ordering(598) 00:13:16.258 fused_ordering(599) 00:13:16.258 fused_ordering(600) 00:13:16.258 fused_ordering(601) 00:13:16.258 fused_ordering(602) 00:13:16.258 fused_ordering(603) 00:13:16.258 fused_ordering(604) 00:13:16.258 fused_ordering(605) 00:13:16.258 fused_ordering(606) 00:13:16.258 fused_ordering(607) 00:13:16.258 fused_ordering(608) 00:13:16.258 fused_ordering(609) 00:13:16.258 fused_ordering(610) 00:13:16.258 fused_ordering(611) 00:13:16.258 fused_ordering(612) 00:13:16.258 fused_ordering(613) 00:13:16.258 fused_ordering(614) 00:13:16.258 fused_ordering(615) 00:13:16.258 fused_ordering(616) 00:13:16.258 fused_ordering(617) 00:13:16.258 fused_ordering(618) 00:13:16.258 fused_ordering(619) 00:13:16.258 fused_ordering(620) 00:13:16.258 fused_ordering(621) 00:13:16.258 fused_ordering(622) 00:13:16.258 fused_ordering(623) 00:13:16.258 fused_ordering(624) 00:13:16.258 fused_ordering(625) 00:13:16.258 fused_ordering(626) 00:13:16.258 fused_ordering(627) 00:13:16.258 fused_ordering(628) 00:13:16.258 fused_ordering(629) 00:13:16.258 fused_ordering(630) 00:13:16.258 fused_ordering(631) 00:13:16.258 fused_ordering(632) 00:13:16.258 fused_ordering(633) 00:13:16.258 fused_ordering(634) 00:13:16.258 fused_ordering(635) 00:13:16.258 
fused_ordering(636) 00:13:16.258 fused_ordering(637) 00:13:16.258 fused_ordering(638) 00:13:16.258 fused_ordering(639) 00:13:16.258 fused_ordering(640) 00:13:16.258 fused_ordering(641) 00:13:16.258 fused_ordering(642) 00:13:16.258 fused_ordering(643) 00:13:16.258 fused_ordering(644) 00:13:16.258 fused_ordering(645) 00:13:16.258 fused_ordering(646) 00:13:16.258 fused_ordering(647) 00:13:16.258 fused_ordering(648) 00:13:16.258 fused_ordering(649) 00:13:16.258 fused_ordering(650) 00:13:16.258 fused_ordering(651) 00:13:16.258 fused_ordering(652) 00:13:16.258 fused_ordering(653) 00:13:16.258 fused_ordering(654) 00:13:16.258 fused_ordering(655) 00:13:16.258 fused_ordering(656) 00:13:16.258 fused_ordering(657) 00:13:16.258 fused_ordering(658) 00:13:16.258 fused_ordering(659) 00:13:16.258 fused_ordering(660) 00:13:16.258 fused_ordering(661) 00:13:16.258 fused_ordering(662) 00:13:16.258 fused_ordering(663) 00:13:16.258 fused_ordering(664) 00:13:16.258 fused_ordering(665) 00:13:16.258 fused_ordering(666) 00:13:16.258 fused_ordering(667) 00:13:16.258 fused_ordering(668) 00:13:16.258 fused_ordering(669) 00:13:16.258 fused_ordering(670) 00:13:16.258 fused_ordering(671) 00:13:16.258 fused_ordering(672) 00:13:16.258 fused_ordering(673) 00:13:16.258 fused_ordering(674) 00:13:16.258 fused_ordering(675) 00:13:16.258 fused_ordering(676) 00:13:16.258 fused_ordering(677) 00:13:16.258 fused_ordering(678) 00:13:16.258 fused_ordering(679) 00:13:16.258 fused_ordering(680) 00:13:16.258 fused_ordering(681) 00:13:16.258 fused_ordering(682) 00:13:16.258 fused_ordering(683) 00:13:16.258 fused_ordering(684) 00:13:16.258 fused_ordering(685) 00:13:16.258 fused_ordering(686) 00:13:16.258 fused_ordering(687) 00:13:16.258 fused_ordering(688) 00:13:16.258 fused_ordering(689) 00:13:16.258 fused_ordering(690) 00:13:16.258 fused_ordering(691) 00:13:16.258 fused_ordering(692) 00:13:16.258 fused_ordering(693) 00:13:16.258 fused_ordering(694) 00:13:16.258 fused_ordering(695) 00:13:16.258 fused_ordering(696) 00:13:16.258 fused_ordering(697) 00:13:16.258 fused_ordering(698) 00:13:16.258 fused_ordering(699) 00:13:16.258 fused_ordering(700) 00:13:16.258 fused_ordering(701) 00:13:16.258 fused_ordering(702) 00:13:16.258 fused_ordering(703) 00:13:16.258 fused_ordering(704) 00:13:16.258 fused_ordering(705) 00:13:16.258 fused_ordering(706) 00:13:16.258 fused_ordering(707) 00:13:16.258 fused_ordering(708) 00:13:16.258 fused_ordering(709) 00:13:16.258 fused_ordering(710) 00:13:16.258 fused_ordering(711) 00:13:16.258 fused_ordering(712) 00:13:16.258 fused_ordering(713) 00:13:16.258 fused_ordering(714) 00:13:16.258 fused_ordering(715) 00:13:16.258 fused_ordering(716) 00:13:16.258 fused_ordering(717) 00:13:16.258 fused_ordering(718) 00:13:16.258 fused_ordering(719) 00:13:16.258 fused_ordering(720) 00:13:16.258 fused_ordering(721) 00:13:16.258 fused_ordering(722) 00:13:16.258 fused_ordering(723) 00:13:16.258 fused_ordering(724) 00:13:16.258 fused_ordering(725) 00:13:16.258 fused_ordering(726) 00:13:16.258 fused_ordering(727) 00:13:16.258 fused_ordering(728) 00:13:16.258 fused_ordering(729) 00:13:16.258 fused_ordering(730) 00:13:16.258 fused_ordering(731) 00:13:16.258 fused_ordering(732) 00:13:16.258 fused_ordering(733) 00:13:16.258 fused_ordering(734) 00:13:16.258 fused_ordering(735) 00:13:16.258 fused_ordering(736) 00:13:16.258 fused_ordering(737) 00:13:16.258 fused_ordering(738) 00:13:16.258 fused_ordering(739) 00:13:16.258 fused_ordering(740) 00:13:16.258 fused_ordering(741) 00:13:16.258 fused_ordering(742) 00:13:16.258 fused_ordering(743) 
00:13:16.258 fused_ordering(744) 00:13:16.258 fused_ordering(745) 00:13:16.258 fused_ordering(746) 00:13:16.258 fused_ordering(747) 00:13:16.258 fused_ordering(748) 00:13:16.258 fused_ordering(749) 00:13:16.258 fused_ordering(750) 00:13:16.258 fused_ordering(751) 00:13:16.258 fused_ordering(752) 00:13:16.258 fused_ordering(753) 00:13:16.258 fused_ordering(754) 00:13:16.258 fused_ordering(755) 00:13:16.258 fused_ordering(756) 00:13:16.258 fused_ordering(757) 00:13:16.258 fused_ordering(758) 00:13:16.258 fused_ordering(759) 00:13:16.258 fused_ordering(760) 00:13:16.258 fused_ordering(761) 00:13:16.258 fused_ordering(762) 00:13:16.258 fused_ordering(763) 00:13:16.258 fused_ordering(764) 00:13:16.258 fused_ordering(765) 00:13:16.258 fused_ordering(766) 00:13:16.258 fused_ordering(767) 00:13:16.258 fused_ordering(768) 00:13:16.258 fused_ordering(769) 00:13:16.258 fused_ordering(770) 00:13:16.258 fused_ordering(771) 00:13:16.258 fused_ordering(772) 00:13:16.258 fused_ordering(773) 00:13:16.258 fused_ordering(774) 00:13:16.258 fused_ordering(775) 00:13:16.258 fused_ordering(776) 00:13:16.258 fused_ordering(777) 00:13:16.258 fused_ordering(778) 00:13:16.258 fused_ordering(779) 00:13:16.258 fused_ordering(780) 00:13:16.258 fused_ordering(781) 00:13:16.258 fused_ordering(782) 00:13:16.258 fused_ordering(783) 00:13:16.258 fused_ordering(784) 00:13:16.258 fused_ordering(785) 00:13:16.258 fused_ordering(786) 00:13:16.258 fused_ordering(787) 00:13:16.258 fused_ordering(788) 00:13:16.258 fused_ordering(789) 00:13:16.258 fused_ordering(790) 00:13:16.258 fused_ordering(791) 00:13:16.258 fused_ordering(792) 00:13:16.258 fused_ordering(793) 00:13:16.258 fused_ordering(794) 00:13:16.258 fused_ordering(795) 00:13:16.258 fused_ordering(796) 00:13:16.258 fused_ordering(797) 00:13:16.258 fused_ordering(798) 00:13:16.258 fused_ordering(799) 00:13:16.258 fused_ordering(800) 00:13:16.258 fused_ordering(801) 00:13:16.258 fused_ordering(802) 00:13:16.258 fused_ordering(803) 00:13:16.258 fused_ordering(804) 00:13:16.258 fused_ordering(805) 00:13:16.258 fused_ordering(806) 00:13:16.258 fused_ordering(807) 00:13:16.258 fused_ordering(808) 00:13:16.258 fused_ordering(809) 00:13:16.258 fused_ordering(810) 00:13:16.258 fused_ordering(811) 00:13:16.258 fused_ordering(812) 00:13:16.258 fused_ordering(813) 00:13:16.258 fused_ordering(814) 00:13:16.258 fused_ordering(815) 00:13:16.258 fused_ordering(816) 00:13:16.258 fused_ordering(817) 00:13:16.258 fused_ordering(818) 00:13:16.258 fused_ordering(819) 00:13:16.258 fused_ordering(820) 00:13:16.519 fused_ordering(821) 00:13:16.519 fused_ordering(822) 00:13:16.519 fused_ordering(823) 00:13:16.519 fused_ordering(824) 00:13:16.519 fused_ordering(825) 00:13:16.519 fused_ordering(826) 00:13:16.519 fused_ordering(827) 00:13:16.519 fused_ordering(828) 00:13:16.519 fused_ordering(829) 00:13:16.519 fused_ordering(830) 00:13:16.519 fused_ordering(831) 00:13:16.519 fused_ordering(832) 00:13:16.519 fused_ordering(833) 00:13:16.519 fused_ordering(834) 00:13:16.519 fused_ordering(835) 00:13:16.519 fused_ordering(836) 00:13:16.519 fused_ordering(837) 00:13:16.519 fused_ordering(838) 00:13:16.519 fused_ordering(839) 00:13:16.519 fused_ordering(840) 00:13:16.519 fused_ordering(841) 00:13:16.519 fused_ordering(842) 00:13:16.519 fused_ordering(843) 00:13:16.519 fused_ordering(844) 00:13:16.519 fused_ordering(845) 00:13:16.519 fused_ordering(846) 00:13:16.519 fused_ordering(847) 00:13:16.519 fused_ordering(848) 00:13:16.519 fused_ordering(849) 00:13:16.519 fused_ordering(850) 00:13:16.519 
fused_ordering(851) 00:13:16.519 fused_ordering(852) 00:13:16.519 fused_ordering(853) 00:13:16.519 fused_ordering(854) 00:13:16.519 fused_ordering(855) 00:13:16.519 fused_ordering(856) 00:13:16.519 fused_ordering(857) 00:13:16.519 fused_ordering(858) 00:13:16.519 fused_ordering(859) 00:13:16.519 fused_ordering(860) 00:13:16.519 fused_ordering(861) 00:13:16.519 fused_ordering(862) 00:13:16.519 fused_ordering(863) 00:13:16.519 fused_ordering(864) 00:13:16.519 fused_ordering(865) 00:13:16.519 fused_ordering(866) 00:13:16.519 fused_ordering(867) 00:13:16.520 fused_ordering(868) 00:13:16.520 fused_ordering(869) 00:13:16.520 fused_ordering(870) 00:13:16.520 fused_ordering(871) 00:13:16.520 fused_ordering(872) 00:13:16.520 fused_ordering(873) 00:13:16.520 fused_ordering(874) 00:13:16.520 fused_ordering(875) 00:13:16.520 fused_ordering(876) 00:13:16.520 fused_ordering(877) 00:13:16.520 fused_ordering(878) 00:13:16.520 fused_ordering(879) 00:13:16.520 fused_ordering(880) 00:13:16.520 fused_ordering(881) 00:13:16.520 fused_ordering(882) 00:13:16.520 fused_ordering(883) 00:13:16.520 fused_ordering(884) 00:13:16.520 fused_ordering(885) 00:13:16.520 fused_ordering(886) 00:13:16.520 fused_ordering(887) 00:13:16.520 fused_ordering(888) 00:13:16.520 fused_ordering(889) 00:13:16.520 fused_ordering(890) 00:13:16.520 fused_ordering(891) 00:13:16.520 fused_ordering(892) 00:13:16.520 fused_ordering(893) 00:13:16.520 fused_ordering(894) 00:13:16.520 fused_ordering(895) 00:13:16.520 fused_ordering(896) 00:13:16.520 fused_ordering(897) 00:13:16.520 fused_ordering(898) 00:13:16.520 fused_ordering(899) 00:13:16.520 fused_ordering(900) 00:13:16.520 fused_ordering(901) 00:13:16.520 fused_ordering(902) 00:13:16.520 fused_ordering(903) 00:13:16.520 fused_ordering(904) 00:13:16.520 fused_ordering(905) 00:13:16.520 fused_ordering(906) 00:13:16.520 fused_ordering(907) 00:13:16.520 fused_ordering(908) 00:13:16.520 fused_ordering(909) 00:13:16.520 fused_ordering(910) 00:13:16.520 fused_ordering(911) 00:13:16.520 fused_ordering(912) 00:13:16.520 fused_ordering(913) 00:13:16.520 fused_ordering(914) 00:13:16.520 fused_ordering(915) 00:13:16.520 fused_ordering(916) 00:13:16.520 fused_ordering(917) 00:13:16.520 fused_ordering(918) 00:13:16.520 fused_ordering(919) 00:13:16.520 fused_ordering(920) 00:13:16.520 fused_ordering(921) 00:13:16.520 fused_ordering(922) 00:13:16.520 fused_ordering(923) 00:13:16.520 fused_ordering(924) 00:13:16.520 fused_ordering(925) 00:13:16.520 fused_ordering(926) 00:13:16.520 fused_ordering(927) 00:13:16.520 fused_ordering(928) 00:13:16.520 fused_ordering(929) 00:13:16.520 fused_ordering(930) 00:13:16.520 fused_ordering(931) 00:13:16.520 fused_ordering(932) 00:13:16.520 fused_ordering(933) 00:13:16.520 fused_ordering(934) 00:13:16.520 fused_ordering(935) 00:13:16.520 fused_ordering(936) 00:13:16.520 fused_ordering(937) 00:13:16.520 fused_ordering(938) 00:13:16.520 fused_ordering(939) 00:13:16.520 fused_ordering(940) 00:13:16.520 fused_ordering(941) 00:13:16.520 fused_ordering(942) 00:13:16.520 fused_ordering(943) 00:13:16.520 fused_ordering(944) 00:13:16.520 fused_ordering(945) 00:13:16.520 fused_ordering(946) 00:13:16.520 fused_ordering(947) 00:13:16.520 fused_ordering(948) 00:13:16.520 fused_ordering(949) 00:13:16.520 fused_ordering(950) 00:13:16.520 fused_ordering(951) 00:13:16.520 fused_ordering(952) 00:13:16.520 fused_ordering(953) 00:13:16.520 fused_ordering(954) 00:13:16.520 fused_ordering(955) 00:13:16.520 fused_ordering(956) 00:13:16.520 fused_ordering(957) 00:13:16.520 fused_ordering(958) 
00:13:16.520 fused_ordering(959) 00:13:16.520 fused_ordering(960) 00:13:16.520 fused_ordering(961) 00:13:16.520 fused_ordering(962) 00:13:16.520 fused_ordering(963) 00:13:16.520 fused_ordering(964) 00:13:16.520 fused_ordering(965) 00:13:16.520 fused_ordering(966) 00:13:16.520 fused_ordering(967) 00:13:16.520 fused_ordering(968) 00:13:16.520 fused_ordering(969) 00:13:16.520 fused_ordering(970) 00:13:16.520 fused_ordering(971) 00:13:16.520 fused_ordering(972) 00:13:16.520 fused_ordering(973) 00:13:16.520 fused_ordering(974) 00:13:16.520 fused_ordering(975) 00:13:16.520 fused_ordering(976) 00:13:16.520 fused_ordering(977) 00:13:16.520 fused_ordering(978) 00:13:16.520 fused_ordering(979) 00:13:16.520 fused_ordering(980) 00:13:16.520 fused_ordering(981) 00:13:16.520 fused_ordering(982) 00:13:16.520 fused_ordering(983) 00:13:16.520 fused_ordering(984) 00:13:16.520 fused_ordering(985) 00:13:16.520 fused_ordering(986) 00:13:16.520 fused_ordering(987) 00:13:16.520 fused_ordering(988) 00:13:16.520 fused_ordering(989) 00:13:16.520 fused_ordering(990) 00:13:16.520 fused_ordering(991) 00:13:16.520 fused_ordering(992) 00:13:16.520 fused_ordering(993) 00:13:16.520 fused_ordering(994) 00:13:16.520 fused_ordering(995) 00:13:16.520 fused_ordering(996) 00:13:16.520 fused_ordering(997) 00:13:16.520 fused_ordering(998) 00:13:16.520 fused_ordering(999) 00:13:16.520 fused_ordering(1000) 00:13:16.520 fused_ordering(1001) 00:13:16.520 fused_ordering(1002) 00:13:16.520 fused_ordering(1003) 00:13:16.520 fused_ordering(1004) 00:13:16.520 fused_ordering(1005) 00:13:16.520 fused_ordering(1006) 00:13:16.520 fused_ordering(1007) 00:13:16.520 fused_ordering(1008) 00:13:16.520 fused_ordering(1009) 00:13:16.520 fused_ordering(1010) 00:13:16.520 fused_ordering(1011) 00:13:16.520 fused_ordering(1012) 00:13:16.520 fused_ordering(1013) 00:13:16.520 fused_ordering(1014) 00:13:16.520 fused_ordering(1015) 00:13:16.520 fused_ordering(1016) 00:13:16.520 fused_ordering(1017) 00:13:16.520 fused_ordering(1018) 00:13:16.520 fused_ordering(1019) 00:13:16.520 fused_ordering(1020) 00:13:16.520 fused_ordering(1021) 00:13:16.520 fused_ordering(1022) 00:13:16.520 fused_ordering(1023) 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:16.520 rmmod nvme_rdma 00:13:16.520 rmmod nvme_fabrics 00:13:16.520 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:16.782 17:58:35 
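nvmftestfini tears the fabrics modules back down defensively: errexit is suspended, the unload is retried up to 20 times (modprobe -r can fail while queue pairs are still draining), and strict mode is restored afterwards. A sketch of that pattern from the traced nvmf/common.sh lines; the back-off sleep is an assumption, the trace only shows the retry loop:

    # Sketch: retry-unload of the NVMe fabrics modules during cleanup (bash)
    set +e                      # an in-use module makes modprobe -r fail; keep going
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                 # assumed back-off between attempts
    done
    set -e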
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3500007 ']' 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3500007 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3500007 ']' 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3500007 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3500007 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3500007' 00:13:16.782 killing process with pid 3500007 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3500007 00:13:16.782 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3500007 00:13:17.043 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.043 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:17.043 00:13:17.043 real 0m10.815s 00:13:17.043 user 0m5.695s 00:13:17.043 sys 0m6.613s 00:13:17.043 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.043 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:17.044 ************************************ 00:13:17.044 END TEST nvmf_fused_ordering 00:13:17.044 ************************************ 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:17.044 ************************************ 00:13:17.044 START TEST nvmf_ns_masking 00:13:17.044 ************************************ 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:13:17.044 * Looking for test storage... 
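killprocess above refuses to kill blindly: it validates the pid argument, confirms the process is alive with kill -0, reads the command name via ps so it never signals the sudo wrapper by mistake, then kills and waits. A simplified sketch of those traced checks (the real helper handles the sudo case by targeting the child instead of bailing out):

    # Sketch: the guarded process kill used by the test harness (bash)
    killprocess_sketch() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                        # must still be running
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 above
        fi
        [[ $process_name == sudo ]] && return 1           # simplified: never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }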
00:13:17.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.044 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:17.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.306 --rc genhtml_branch_coverage=1 00:13:17.306 --rc genhtml_function_coverage=1 00:13:17.306 --rc genhtml_legend=1 00:13:17.306 --rc geninfo_all_blocks=1 00:13:17.306 --rc geninfo_unexecuted_blocks=1 00:13:17.306 00:13:17.306 ' 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:17.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.306 --rc genhtml_branch_coverage=1 00:13:17.306 --rc genhtml_function_coverage=1 00:13:17.306 --rc genhtml_legend=1 00:13:17.306 --rc geninfo_all_blocks=1 00:13:17.306 --rc geninfo_unexecuted_blocks=1 00:13:17.306 00:13:17.306 ' 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:17.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.306 --rc genhtml_branch_coverage=1 00:13:17.306 --rc genhtml_function_coverage=1 00:13:17.306 --rc genhtml_legend=1 00:13:17.306 --rc geninfo_all_blocks=1 00:13:17.306 --rc geninfo_unexecuted_blocks=1 00:13:17.306 00:13:17.306 ' 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:17.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.306 --rc genhtml_branch_coverage=1 00:13:17.306 --rc genhtml_function_coverage=1 00:13:17.306 --rc genhtml_legend=1 00:13:17.306 --rc geninfo_all_blocks=1 00:13:17.306 --rc geninfo_unexecuted_blocks=1 00:13:17.306 00:13:17.306 ' 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.306 17:58:35 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.306 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:17.307 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:17.307 17:58:35 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=77c733e8-c656-471f-afab-e9e040ab1eef 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=693c18b2-eacc-45db-8bbd-ac2b25e71bb6 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6fdd0281-58a7-4cc0-837c-dcf16630e1b1 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:17.307 17:58:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.617 17:58:43 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:25.617 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:25.617 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:25.617 Found net devices under 0000:98:00.0: mlx_0_0 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:25.617 Found net devices under 0000:98:00.1: mlx_0_1 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:25.617 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:25.617 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:25.617 altname enp152s0f0np0 00:13:25.617 altname ens817f0np0 00:13:25.617 inet 192.168.100.8/24 scope global mlx_0_0 00:13:25.617 valid_lft forever preferred_lft forever 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:25.617 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:25.617 link/ether 
ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:25.617 altname enp152s0f1np1 00:13:25.617 altname ens817f1np1 00:13:25.617 inet 192.168.100.9/24 scope global mlx_0_1 00:13:25.617 valid_lft forever preferred_lft forever 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:25.617 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
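Address discovery here reduces to one iproute2 pipeline per RDMA interface: print the single IPv4 record, keep field 4 (the addr/prefix pair), and strip the prefix length with the cut step that continues on the next trace line. A rough equivalent of the get_ip_address helper being traced:

    # Rough equivalent of nvmf/common.sh's get_ip_address as traced here.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" emits one line per address; field 4 is addr/prefix CIDR.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on this host
    get_ip_address mlx_0_1    # -> 192.168.100.9

The two results are joined into RDMA_IP_LIST, and the first entry becomes NVMF_FIRST_TARGET_IP, the 192.168.100.8 that every connect call later in the test targets.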
00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:25.618 192.168.100.9' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:25.618 192.168.100.9' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:25.618 192.168.100.9' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3504782 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3504782 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3504782 ']' 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.618 17:58:43 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.618 17:58:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:25.618 [2024-11-20 17:58:44.012250] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:25.618 [2024-11-20 17:58:44.012320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.618 [2024-11-20 17:58:44.098634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.959 [2024-11-20 17:58:44.139514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.959 [2024-11-20 17:58:44.139553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.959 [2024-11-20 17:58:44.139561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.959 [2024-11-20 17:58:44.139567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.959 [2024-11-20 17:58:44.139574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.959 [2024-11-20 17:58:44.140244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.560 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.560 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:26.560 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.560 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.560 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:26.560 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.560 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:26.560 [2024-11-20 17:58:45.045370] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x667c80/0x66c170) succeed. 00:13:26.560 [2024-11-20 17:58:45.058528] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x669130/0x6ad810) succeed. 
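From here the test drives the target entirely through scripts/rpc.py. The setup that the following trace lines step through condenses to a short RPC sequence plus one host-side connect; the sketch below collects the commands as they appear in the trace (the UUID and NQNs are the ones generated for this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Two 64 MiB / 512 B-block malloc bdevs to export as namespaces.
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2

    # One subsystem, namespace 1 backed by Malloc1, RDMA listener on 4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Host side: connect with an explicit host NQN and host ID, since the
    # masking checks below are keyed on which host is asking.
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 6fdd0281-58a7-4cc0-837c-dcf16630e1b1 -a 192.168.100.8 -s 4420 -i 4

Whether a namespace is then visible is judged the way the traced ns_is_visible helper does it: nvme list-ns must show the NSID, and nvme id-ns -o json must report a non-zero nguid; a namespace masked from this host reads back as all zeros. A sketch of that check, under the same assumptions:

    # Sketch of the ns_is_visible check used throughout the rest of the test.
    ns_is_visible() {
        local nsid=$1    # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The later nvmf_subsystem_add_ns call with --no-auto-visible, paired with nvmf_ns_add_host and nvmf_ns_remove_host, is what toggles that visibility per host NQN, which is exactly what the [ 0]:0x1 / [ 1]:0x2 grep output in the trace is asserting.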
00:13:26.820 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:26.820 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:26.820 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:26.820 Malloc1 00:13:26.820 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:27.080 Malloc2 00:13:27.080 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:27.340 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:27.601 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:27.601 [2024-11-20 17:58:46.037115] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:27.601 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:27.601 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fdd0281-58a7-4cc0-837c-dcf16630e1b1 -a 192.168.100.8 -s 4420 -i 4 00:13:28.171 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.171 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:28.171 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.171 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:28.172 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.083 [ 0]:0x1 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.083 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.343 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15fcb03fb6e74560a3861edd07ec5c97 00:13:30.343 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15fcb03fb6e74560a3861edd07ec5c97 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.343 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:30.343 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:30.343 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.343 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.343 [ 0]:0x1 00:13:30.343 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.343 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15fcb03fb6e74560a3861edd07ec5c97 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15fcb03fb6e74560a3861edd07ec5c97 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:30.603 [ 1]:0x2 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd65b7a0980248f1ac23efac6cb3f9da 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd65b7a0980248f1ac23efac6cb3f9da != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:30.603 17:58:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:13:31.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.171 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.171 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:31.430 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:31.430 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fdd0281-58a7-4cc0-837c-dcf16630e1b1 -a 192.168.100.8 -s 4420 -i 4 00:13:31.691 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:31.691 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:31.691 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.691 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:31.691 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:31.691 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:34.233 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:34.233 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:34.233 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.233 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:34.233 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.234 [ 0]:0x2 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd65b7a0980248f1ac23efac6cb3f9da 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd65b7a0980248f1ac23efac6cb3f9da != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.234 [ 0]:0x1 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.234 17:58:52 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15fcb03fb6e74560a3861edd07ec5c97 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15fcb03fb6e74560a3861edd07ec5c97 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.234 [ 1]:0x2 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd65b7a0980248f1ac23efac6cb3f9da 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd65b7a0980248f1ac23efac6cb3f9da != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.234 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.495 [ 0]:0x2 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd65b7a0980248f1ac23efac6cb3f9da 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd65b7a0980248f1ac23efac6cb3f9da != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:34.495 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.066 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:35.066 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:35.066 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fdd0281-58a7-4cc0-837c-dcf16630e1b1 -a 192.168.100.8 -s 4420 -i 4 00:13:35.637 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:35.637 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:35.637 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.637 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:35.637 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:35.637 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:37.550 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.550 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.550 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.550 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:37.550 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.550 17:58:56 
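
For reference, the ns_is_visible helper exercised throughout this trace (the ns_masking.sh@43-45 lines) boils down to roughly the following. This is a sketch reconstructed from the xtrace above, not the verbatim script; the all-zero comparison works because a namespace masked from the connecting host reports an all-zero NGUID:

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # visible namespaces carry a real NGUID; masked ones report all zeros
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
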
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:37.550 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:37.550 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.811 [ 0]:0x1 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15fcb03fb6e74560a3861edd07ec5c97 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15fcb03fb6e74560a3861edd07ec5c97 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.811 [ 1]:0x2 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd65b7a0980248f1ac23efac6cb3f9da 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd65b7a0980248f1ac23efac6cb3f9da != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.811 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:38.072 17:58:56 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.072 [ 0]:0x2 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd65b7a0980248f1ac23efac6cb3f9da 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd65b7a0980248f1ac23efac6cb3f9da != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:38.072 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:38.334 [2024-11-20 17:58:56.666748] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:38.334 request: 00:13:38.334 { 00:13:38.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.334 "nsid": 2, 00:13:38.334 "host": "nqn.2016-06.io.spdk:host1", 00:13:38.334 "method": "nvmf_ns_remove_host", 00:13:38.334 "req_id": 1 00:13:38.334 } 00:13:38.334 Got JSON-RPC error response 00:13:38.334 response: 00:13:38.334 { 00:13:38.334 "code": -32602, 00:13:38.334 "message": "Invalid parameters" 00:13:38.334 } 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.334 17:58:56 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.334 [ 0]:0x2 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd65b7a0980248f1ac23efac6cb3f9da 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd65b7a0980248f1ac23efac6cb3f9da != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:38.334 17:58:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3507533 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3507533 /var/tmp/host.sock 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3507533 ']' 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:38.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
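
The step above starts a second SPDK application that plays the NVMe-oF host role: spdk_tgt is launched with -r /var/tmp/host.sock so its JSON-RPC listener does not collide with the target's default socket, and core mask 2 pins it to core 1, which is why the later killprocess lines report reactor_1. The hostrpc helper seen further down (ns_masking.sh@48) is then just rpc.py pointed at that socket. A minimal sketch of the pattern, with paths taken from the log:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    hostrpc() {
        # drive the host-side app instead of the default target socket
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
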
00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.904 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:38.904 [2024-11-20 17:58:57.308676] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:38.904 [2024-11-20 17:58:57.308727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507533 ] 00:13:38.904 [2024-11-20 17:58:57.402971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.162 [2024-11-20 17:58:57.439748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.732 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.732 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:39.732 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.993 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.993 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 77c733e8-c656-471f-afab-e9e040ab1eef 00:13:39.993 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:39.993 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 77C733E8C656471FAFABE9E040AB1EEF -i 00:13:40.253 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 693c18b2-eacc-45db-8bbd-ac2b25e71bb6 00:13:40.253 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:40.253 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 693C18B2EACC45DB8BBDAC2B25E71BB6 -i 00:13:40.253 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:40.513 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:40.773 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:40.773 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:13:41.034 nvme0n1 00:13:41.034 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:41.034 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:41.294 nvme1n2 00:13:41.294 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:41.294 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:41.294 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:41.295 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:41.295 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:41.295 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:41.295 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:41.295 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:41.295 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:41.556 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 77c733e8-c656-471f-afab-e9e040ab1eef == \7\7\c\7\3\3\e\8\-\c\6\5\6\-\4\7\1\f\-\a\f\a\b\-\e\9\e\0\4\0\a\b\1\e\e\f ]] 00:13:41.556 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:41.556 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:41.556 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:41.816 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 693c18b2-eacc-45db-8bbd-ac2b25e71bb6 == \6\9\3\c\1\8\b\2\-\e\a\c\c\-\4\5\d\b\-\8\b\b\d\-\a\c\2\b\2\5\e\7\1\b\b\6 ]] 00:13:41.816 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 77c733e8-c656-471f-afab-e9e040ab1eef 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 77C733E8C656471FAFABE9E040AB1EEF 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 77C733E8C656471FAFABE9E040AB1EEF 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:42.077 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:42.078 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 77C733E8C656471FAFABE9E040AB1EEF 00:13:42.338 [2024-11-20 17:59:00.681848] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:42.338 [2024-11-20 17:59:00.681887] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:42.338 [2024-11-20 17:59:00.681896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.338 request: 00:13:42.338 { 00:13:42.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.338 "namespace": { 00:13:42.338 "bdev_name": "invalid", 00:13:42.338 "nsid": 1, 00:13:42.338 "nguid": "77C733E8C656471FAFABE9E040AB1EEF", 00:13:42.338 "no_auto_visible": false 00:13:42.338 }, 00:13:42.338 "method": "nvmf_subsystem_add_ns", 00:13:42.338 "req_id": 1 00:13:42.338 } 00:13:42.338 Got JSON-RPC error response 00:13:42.338 response: 00:13:42.338 { 00:13:42.338 "code": -32602, 00:13:42.338 "message": "Invalid parameters" 00:13:42.338 } 00:13:42.338 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:42.338 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.338 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.338 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.338 17:59:00 
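
The failure above is intentional: nvmf_subsystem_add_ns is handed the bogus bdev name "invalid", the target cannot open it (error -19, ENODEV), and the RPC returns -32602 Invalid parameters. The NOT wrapper from autotest_common.sh turns that expected failure into a test pass. A condensed sketch of its core logic; the real helper also validates that the wrapped command exists (the valid_exec_arg / type -t lines above) and special-cases exit codes above 128:

    NOT() {
        local es=0
        "$@" || es=$?
        # NOT succeeds only when the wrapped command failed
        (( es != 0 ))
    }
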
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 77c733e8-c656-471f-afab-e9e040ab1eef 00:13:42.338 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:42.338 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 77C733E8C656471FAFABE9E040AB1EEF -i 00:13:42.599 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:44.512 17:59:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:44.512 17:59:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:44.512 17:59:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3507533 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3507533 ']' 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3507533 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3507533 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3507533' 00:13:44.774 killing process with pid 3507533 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3507533 00:13:44.774 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3507533 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:45.035 17:59:03 
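
The uuid2nguid conversion traced above (nvmf/common.sh@787) turns a canonical lowercase UUID such as 77c733e8-c656-471f-afab-e9e040ab1eef into the 32-hex-digit NGUID 77C733E8C656471FAFABE9E040AB1EEF that nvmf_subsystem_add_ns expects for -g. Only the tr -d - step is visible in the xtrace, so the uppercasing shown here is an assumption about the surrounding helper:

    uuid2nguid() {
        local uuid=${1^^}     # uppercase; assumed, the trace only shows the dash removal
        echo "${uuid//-/}"    # strip dashes, equivalent to tr -d -
    }
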
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:45.035 rmmod nvme_rdma 00:13:45.035 rmmod nvme_fabrics 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3504782 ']' 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3504782 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3504782 ']' 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3504782 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.035 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3504782 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3504782' 00:13:45.296 killing process with pid 3504782 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3504782 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3504782 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:45.296 00:13:45.296 real 0m28.408s 00:13:45.296 user 0m33.875s 00:13:45.296 sys 0m8.331s 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.296 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.296 ************************************ 00:13:45.296 END TEST nvmf_ns_masking 00:13:45.296 ************************************ 00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.556 ************************************ 00:13:45.556 START TEST nvmf_nvme_cli 00:13:45.556 ************************************ 
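
Before the nvmf_nvme_cli run begins, note the teardown that just closed nvmf_ns_masking: it is the standard nvmftestfini sequence for an RDMA run, which flushes I/O, retry-unloads the host-side kernel modules (the rmmod nvme_rdma / rmmod nvme_fabrics output above), and finally kills the long-running target (pid 3504782, reactor_0). A condensed sketch of what the nvmf/common.sh trace performs, with the retry details simplified:

    nvmftestfini() {
        sync
        set +e                              # module unload may legitimately fail and be retried
        for i in {1..20}; do
            modprobe -v -r nvme-rdma && break
            sleep 1                         # assumed backoff; only the retry loop is visible in the trace
        done
        modprobe -v -r nvme-fabrics
        set -e
        [[ -n $nvmfpid ]] && kill "$nvmfpid"  # variable name assumed; the real killprocess helper also verifies the process name
    }
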
00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:45.556 * Looking for test storage... 00:13:45.556 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:45.556 17:59:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:45.556 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:45.556 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.556 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.556 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:45.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.557 --rc genhtml_branch_coverage=1 00:13:45.557 --rc genhtml_function_coverage=1 00:13:45.557 --rc genhtml_legend=1 00:13:45.557 --rc geninfo_all_blocks=1 00:13:45.557 --rc geninfo_unexecuted_blocks=1 00:13:45.557 00:13:45.557 ' 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:45.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.557 --rc genhtml_branch_coverage=1 00:13:45.557 --rc genhtml_function_coverage=1 00:13:45.557 --rc genhtml_legend=1 00:13:45.557 --rc geninfo_all_blocks=1 00:13:45.557 --rc geninfo_unexecuted_blocks=1 00:13:45.557 00:13:45.557 ' 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:45.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.557 --rc genhtml_branch_coverage=1 00:13:45.557 --rc genhtml_function_coverage=1 00:13:45.557 --rc genhtml_legend=1 00:13:45.557 --rc geninfo_all_blocks=1 00:13:45.557 --rc geninfo_unexecuted_blocks=1 00:13:45.557 00:13:45.557 ' 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:45.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.557 --rc genhtml_branch_coverage=1 00:13:45.557 --rc genhtml_function_coverage=1 00:13:45.557 --rc genhtml_legend=1 00:13:45.557 --rc geninfo_all_blocks=1 00:13:45.557 --rc geninfo_unexecuted_blocks=1 00:13:45.557 00:13:45.557 ' 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.557 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.818 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.818 17:59:04 
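
Worth noting in the common.sh setup above: nvme gen-hostnqn produces an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<host-uuid>, and NVME_HOSTID is the bare UUID carried inside it; both are later passed on every nvme connect via the NVME_HOST array. A sketch of the relationship (the array matches nvmf/common.sh@19 in the trace; the exact UUID extraction is not visible, so the parameter expansion here is an assumption):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumed: strip everything through "uuid:"
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
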
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:45.818 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.819 17:59:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:53.965 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:53.965 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:53.965 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:53.966 Found net devices under 0000:98:00.0: mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:53.966 Found net devices under 0000:98:00.1: mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:53.966 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:53.966 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:53.966 altname enp152s0f0np0 00:13:53.966 altname ens817f0np0 00:13:53.966 inet 192.168.100.8/24 scope global mlx_0_0 00:13:53.966 valid_lft forever preferred_lft forever 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:53.966 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:53.966 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:53.966 altname enp152s0f1np1 00:13:53.966 altname ens817f1np1 00:13:53.966 inet 192.168.100.9/24 scope global mlx_0_1 00:13:53.966 valid_lft forever preferred_lft forever 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:53.966 17:59:12 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:53.966 192.168.100.9' 00:13:53.966 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:53.966 192.168.100.9' 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:53.967 192.168.100.9' 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:13:53.967 17:59:12 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3513259 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3513259 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3513259 ']' 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.967 17:59:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.228 [2024-11-20 17:59:12.525764] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:54.228 [2024-11-20 17:59:12.525814] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.228 [2024-11-20 17:59:12.605780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.228 [2024-11-20 17:59:12.643063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.228 [2024-11-20 17:59:12.643095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
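The RDMA_IP_LIST plumbing traced above reduces to one reusable shell idiom: "ip -o -4 addr show DEV" prints one record per line, field 4 holds ADDR/PREFIX, and stripping the prefix length yields the bare address. A minimal sketch of that idiom, assuming the mlx_0_0/mlx_0_1 interface names from this run (the helper name below is illustrative, not the one in nvmf/common.sh):

  #!/usr/bin/env bash
  # Print the first IPv4 address bound to a device. With -o, "ip" emits one
  # record per line and field 4 is ADDR/PREFIX (e.g. 192.168.100.8/24).
  ipv4_of() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  ipv4_of mlx_0_0   # prints 192.168.100.8 on this rig
  ipv4_of mlx_0_1   # prints 192.168.100.9 on this rig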
00:13:54.228 [2024-11-20 17:59:12.643104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.228 [2024-11-20 17:59:12.643110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.228 [2024-11-20 17:59:12.643116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.228 [2024-11-20 17:59:12.644899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.228 [2024-11-20 17:59:12.645133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.228 [2024-11-20 17:59:12.645134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.228 [2024-11-20 17:59:12.644969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.800 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.800 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:54.800 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:54.800 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:54.800 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.062 [2024-11-20 17:59:13.407030] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc04f30/0xc09420) succeed. 00:13:55.062 [2024-11-20 17:59:13.421564] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc065c0/0xc4aac0) succeed. 
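With the rdma transport created and both mlx5 IB devices registered above, everything that follows is plain JSON-RPC against the target's UNIX socket. The same provisioning can be replayed by hand with scripts/rpc.py; a sketch assuming a nvmf_tgt is already listening on the default /var/tmp/spdk.sock, with arguments copied verbatim from the rpc_cmd traces in this log:

  #!/usr/bin/env bash
  set -e
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Same calls the harness issues via rpc_cmd, in the same order:
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MB bdev, 512-byte blocks
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420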
00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.062 Malloc0 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.062 Malloc1 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.062 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.325 [2024-11-20 17:59:13.637898] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:55.325 17:59:13 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 4420 00:13:55.325 00:13:55.325 Discovery Log Number of Records 2, Generation counter 2 00:13:55.325 =====Discovery Log Entry 0====== 00:13:55.325 trtype: rdma 00:13:55.325 adrfam: ipv4 00:13:55.325 subtype: current discovery subsystem 00:13:55.325 treq: not required 00:13:55.325 portid: 0 00:13:55.325 trsvcid: 4420 00:13:55.325 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:55.325 traddr: 192.168.100.8 00:13:55.325 eflags: explicit discovery connections, duplicate discovery information 00:13:55.325 rdma_prtype: not specified 00:13:55.325 rdma_qptype: connected 00:13:55.325 rdma_cms: rdma-cm 00:13:55.325 rdma_pkey: 0x0000 00:13:55.325 =====Discovery Log Entry 1====== 00:13:55.325 trtype: rdma 00:13:55.325 adrfam: ipv4 00:13:55.325 subtype: nvme subsystem 00:13:55.325 treq: not required 00:13:55.325 portid: 0 00:13:55.325 trsvcid: 4420 00:13:55.325 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:55.325 traddr: 192.168.100.8 00:13:55.325 eflags: none 00:13:55.325 rdma_prtype: not specified 00:13:55.325 rdma_qptype: connected 00:13:55.325 rdma_cms: rdma-cm 00:13:55.325 rdma_pkey: 0x0000 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:55.325 17:59:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:56.714 17:59:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:56.714 17:59:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:56.714 17:59:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
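waitforserial, entered just above, is a bounded poll: sleep, count the block devices whose SERIAL column matches, and return success once the count reaches the expected number (2 here, one per namespace). A standalone sketch of the same loop; the bounds mirror the "(( i++ <= 15 ))" and "sleep 2" visible in the trace, while the function name below is ours, not the harness's:

  # Succeed once $2 block devices with serial $1 appear; give up after ~15 tries.
  wait_for_serial() {
      local serial=$1 want=${2:-1} i=0 found=0
      while (( i++ <= 15 )); do
          sleep 2
          found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( found == want )) && return 0
      done
      return 1
  }
  wait_for_serial SPDKISFASTANDAWESOME 2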
00:13:56.714 17:59:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:56.714 17:59:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:56.714 17:59:15 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:58.630 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.631 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:58.631 /dev/nvme0n2 ]] 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli --
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:58.893 17:59:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.278 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:00.278 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:00.278 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:00.278 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.278 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:00.278 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
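The cleanup loop that begins here retries the unload up to 20 times because the kernel refuses to remove nvme-rdma while a controller still holds a reference, and the disconnect above may still be draining. A sketch of the idiom; the module names and the 20-try bound come from the nvmfcleanup trace, while the sleep between attempts is this sketch's assumption:

  # Tear down the fabrics stack; retry while module refcounts drain.
  for i in {1..20}; do
      if modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics; then
          break
      fi
      sleep 1
  done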
00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:00.279 rmmod nvme_rdma 00:14:00.279 rmmod nvme_fabrics 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3513259 ']' 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3513259 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3513259 ']' 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3513259 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3513259 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3513259' 00:14:00.279 killing process with pid 3513259 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3513259 00:14:00.279 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3513259 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:00.540 00:14:00.540 real 0m15.029s 00:14:00.540 user 0m27.335s 00:14:00.540 sys 0m6.944s 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:00.540 ************************************ 00:14:00.540 END TEST nvmf_nvme_cli 00:14:00.540 ************************************ 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.540 ************************************ 00:14:00.540 START TEST nvmf_auth_target 00:14:00.540 ************************************ 00:14:00.540 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- #
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:00.540 * Looking for test storage... 00:14:00.540 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:00.540 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:00.540 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:00.540 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:00.805 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:00.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.806 --rc genhtml_branch_coverage=1 00:14:00.806 --rc genhtml_function_coverage=1 00:14:00.806 --rc genhtml_legend=1 00:14:00.806 --rc geninfo_all_blocks=1 00:14:00.806 --rc geninfo_unexecuted_blocks=1 00:14:00.806 00:14:00.806 ' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:00.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.806 --rc genhtml_branch_coverage=1 00:14:00.806 --rc genhtml_function_coverage=1 00:14:00.806 --rc genhtml_legend=1 00:14:00.806 --rc geninfo_all_blocks=1 00:14:00.806 --rc geninfo_unexecuted_blocks=1 00:14:00.806 00:14:00.806 ' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:00.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.806 --rc genhtml_branch_coverage=1 00:14:00.806 --rc genhtml_function_coverage=1 00:14:00.806 --rc genhtml_legend=1 00:14:00.806 --rc geninfo_all_blocks=1 00:14:00.806 --rc geninfo_unexecuted_blocks=1 00:14:00.806 00:14:00.806 ' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:00.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.806 --rc genhtml_branch_coverage=1 00:14:00.806 --rc genhtml_function_coverage=1 00:14:00.806 --rc genhtml_legend=1 00:14:00.806 --rc geninfo_all_blocks=1 00:14:00.806 --rc geninfo_unexecuted_blocks=1 00:14:00.806 00:14:00.806 ' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.806 17:59:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:00.806 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:00.807 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:08.951 17:59:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.951 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:08.952 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:08.952 17:59:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:08.952 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:08.952 Found net devices under 0000:98:00.0: mlx_0_0 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:08.952 Found net devices under 0000:98:00.1: mlx_0_1 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.952 17:59:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:08.952 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:08.952 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:08.952 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:14:08.952 altname enp152s0f0np0 00:14:08.952 altname ens817f0np0 00:14:08.952 inet 192.168.100.8/24 scope global mlx_0_0 00:14:08.952 valid_lft forever preferred_lft forever 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:08.953 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:08.953 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:14:08.953 altname enp152s0f1np1 00:14:08.953 altname ens817f1np1 00:14:08.953 inet 192.168.100.9/24 scope global mlx_0_1 00:14:08.953 valid_lft forever preferred_lft forever 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:08.953 17:59:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:08.953 192.168.100.9' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:08.953 192.168.100.9' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:08.953 192.168.100.9' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3518700 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3518700 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3518700 ']' 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
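
get_available_rdma_ips returns the addresses as one newline-separated string, and the first and second target IPs are peeled off with the head/tail idiom traced above; a sketch:

#!/usr/bin/env bash
# Split a newline-separated address list into first/second target IPs,
# mirroring the trace: head takes line 1, tail -n +2 | head takes line 2.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
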
00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.953 17:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3519015 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a572ae7109df841ac68707a7cf03452973ae2e926df92540 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nO5 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a572ae7109df841ac68707a7cf03452973ae2e926df92540 0 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a572ae7109df841ac68707a7cf03452973ae2e926df92540 0 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a572ae7109df841ac68707a7cf03452973ae2e926df92540 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nO5 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nO5 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.nO5 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ddc680b687337fa1cea0efd78a3266f480946e9ca915dde6ea6b56f4d6bfd1c9 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CTy 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ddc680b687337fa1cea0efd78a3266f480946e9ca915dde6ea6b56f4d6bfd1c9 3 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ddc680b687337fa1cea0efd78a3266f480946e9ca915dde6ea6b56f4d6bfd1c9 3 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ddc680b687337fa1cea0efd78a3266f480946e9ca915dde6ea6b56f4d6bfd1c9 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:09.895 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:10.156 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CTy 00:14:10.156 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CTy 00:14:10.156 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.CTy 00:14:10.156 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:10.156 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:10.157 17:59:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0b1a0f26917dc372482d19ac53221b5c 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NjA 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0b1a0f26917dc372482d19ac53221b5c 1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0b1a0f26917dc372482d19ac53221b5c 1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0b1a0f26917dc372482d19ac53221b5c 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NjA 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NjA 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.NjA 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=30c8877fa16e26af7054453c3610e8c1a0eae7be2e857566 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3xd 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 30c8877fa16e26af7054453c3610e8c1a0eae7be2e857566 2 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 30c8877fa16e26af7054453c3610e8c1a0eae7be2e857566 2 00:14:10.157 17:59:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=30c8877fa16e26af7054453c3610e8c1a0eae7be2e857566 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3xd 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3xd 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.3xd 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b4cd08002e82200d97abbc6b18ef0717213c2994c3a4ae47 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.QsH 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b4cd08002e82200d97abbc6b18ef0717213c2994c3a4ae47 2 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b4cd08002e82200d97abbc6b18ef0717213c2994c3a4ae47 2 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b4cd08002e82200d97abbc6b18ef0717213c2994c3a4ae47 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.QsH 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.QsH 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.QsH 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb240b064f205dd1a9fdddfb410a67bb 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ua1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb240b064f205dd1a9fdddfb410a67bb 1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb240b064f205dd1a9fdddfb410a67bb 1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb240b064f205dd1a9fdddfb410a67bb 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:10.157 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ua1 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ua1 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ua1 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=087d53abc348fe40c57c8a5c2e167b04d405690be74810bb7f3aab6c779772a9 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:10.418 17:59:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9kW 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 087d53abc348fe40c57c8a5c2e167b04d405690be74810bb7f3aab6c779772a9 3 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 087d53abc348fe40c57c8a5c2e167b04d405690be74810bb7f3aab6c779772a9 3 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=087d53abc348fe40c57c8a5c2e167b04d405690be74810bb7f3aab6c779772a9 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9kW 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9kW 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.9kW 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3518700 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3518700 ']' 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3519015 /var/tmp/host.sock 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3519015 ']' 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
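
The gen_dhchap_key calls above draw a random hex secret with xxd and wrap it into the DHHC-1:&lt;digest&gt;:&lt;base64 payload&gt;: envelope via an inline python step. A sketch of that flow; the payload layout (the ASCII hex key followed by its little-endian CRC-32, base64-encoded, which is what the secrets later in this log decode to) is an assumption inferred from the trace, not quoted from common.sh:

#!/usr/bin/env bash
# gen_dhchap_key sketch: random hex secret -> DHHC-1 formatted key file.
digest=0    # 0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map in the trace
len=48      # hex chars of secret; xxd reads len/2 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, binascii, struct, sys
# assumption: payload = ASCII hex key + little-endian CRC-32 of it
key = sys.argv[1].encode()
payload = key + struct.pack('<I', binascii.crc32(key))
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(payload).decode()}:")
EOF
chmod 0600 "$file"
echo "$file"
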
00:14:10.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.418 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nO5 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.679 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nO5 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nO5 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.CTy ]] 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CTy 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CTy 00:14:10.938 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CTy 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NjA 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.198 17:59:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.NjA 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NjA 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.3xd ]] 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3xd 00:14:11.198 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.199 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3xd 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3xd 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.QsH 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.QsH 00:14:11.459 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.QsH 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ua1 ]] 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ua1 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ua1 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ua1 00:14:11.720 17:59:30 
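
Each generated key file is then registered twice through rpc.py keyring_file_add_key, once against the target's default RPC socket and once against the host app on /var/tmp/host.sock, as the repeated pairs of calls above show. Condensed sketch (rpc.py path illustrative, file names taken from this run):

#!/usr/bin/env bash
# Register keyN/ckeyN files with both the nvmf target and the host app.
RPC=scripts/rpc.py                      # illustrative path into an SPDK checkout
declare -A keyfiles=(
    [key0]=/tmp/spdk.key-null.nO5   [ckey0]=/tmp/spdk.key-sha512.CTy
    [key1]=/tmp/spdk.key-sha256.NjA [ckey1]=/tmp/spdk.key-sha384.3xd
)
for name in "${!keyfiles[@]}"; do
    "$RPC" keyring_file_add_key "$name" "${keyfiles[$name]}"                        # target side
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key "$name" "${keyfiles[$name]}"  # host side
done
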
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9kW 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.9kW 00:14:11.720 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.9kW 00:14:11.981 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:11.981 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:11.981 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.981 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.981 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:11.981 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.242 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.243 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.503 00:14:12.503 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.503 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.503 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.503 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.503 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.503 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.503 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.503 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.503 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.503 { 00:14:12.503 "cntlid": 1, 00:14:12.503 "qid": 0, 00:14:12.503 "state": "enabled", 00:14:12.503 "thread": "nvmf_tgt_poll_group_000", 00:14:12.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:12.503 "listen_address": { 00:14:12.503 "trtype": "RDMA", 00:14:12.503 "adrfam": "IPv4", 00:14:12.503 "traddr": "192.168.100.8", 00:14:12.503 "trsvcid": "4420" 00:14:12.503 }, 00:14:12.503 "peer_address": { 00:14:12.503 "trtype": "RDMA", 00:14:12.503 "adrfam": "IPv4", 00:14:12.503 "traddr": "192.168.100.8", 00:14:12.503 "trsvcid": "43878" 00:14:12.503 }, 00:14:12.503 "auth": { 00:14:12.503 "state": "completed", 00:14:12.503 "digest": "sha256", 00:14:12.503 "dhgroup": "null" 00:14:12.503 } 00:14:12.503 } 00:14:12.503 ]' 00:14:12.503 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.763 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.763 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.763 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:12.763 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.763 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.763 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.763 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:13.023 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:14:13.023 17:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:14:13.964 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.964 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.964 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.964 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.964 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.964 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.964 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.964 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.225 17:59:32 
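
The nvme_connect step above hands both DHHC-1 secrets straight to nvme-cli; in isolation the connect/disconnect round-trip it performs looks like the following, with the flags and NQNs taken from the log and the secrets replaced by placeholders:

#!/usr/bin/env bash
# DHCHAP connect/disconnect round-trip, as nvme_connect does above.
host_key='DHHC-1:00:REPLACE_WITH_KEY0:'    # placeholder, not a real secret
ctrl_key='DHHC-1:03:REPLACE_WITH_CKEY0:'   # placeholder, not a real secret
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
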
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.225 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.225 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.485 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.485 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.485 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.485 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.485 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.485 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.485 { 00:14:14.485 "cntlid": 3, 00:14:14.486 "qid": 0, 00:14:14.486 "state": "enabled", 00:14:14.486 "thread": "nvmf_tgt_poll_group_000", 00:14:14.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:14.486 "listen_address": { 00:14:14.486 "trtype": "RDMA", 00:14:14.486 "adrfam": "IPv4", 00:14:14.486 "traddr": "192.168.100.8", 00:14:14.486 "trsvcid": "4420" 00:14:14.486 }, 00:14:14.486 "peer_address": { 00:14:14.486 "trtype": "RDMA", 00:14:14.486 "adrfam": "IPv4", 00:14:14.486 "traddr": "192.168.100.8", 00:14:14.486 "trsvcid": "49159" 00:14:14.486 }, 00:14:14.486 "auth": { 00:14:14.486 "state": "completed", 00:14:14.486 "digest": "sha256", 00:14:14.486 "dhgroup": "null" 00:14:14.486 } 00:14:14.486 } 00:14:14.486 ]' 00:14:14.486 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.486 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.486 17:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.746 17:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:14.746 17:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.746 17:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.746 17:59:33 
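
After each attach, the test pulls the subsystem's qpairs over RPC and asserts the negotiated digest, DH group, and auth state, which is what the jq probes above check. Standalone form:

#!/usr/bin/env bash
# Verify the authenticated qpair, as the jq checks in the trace do.
RPC=scripts/rpc.py     # illustrative path
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]] && echo auth OK
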
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.746 17:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.746 17:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:14:14.746 17:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:14:15.687 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.949 17:59:34 
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.949 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:16.209
00:14:16.210 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:16.210 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:16.210 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:16.470 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:16.470 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:16.470 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.470 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.470 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.470 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:16.470 {
00:14:16.470 "cntlid": 5,
00:14:16.470 "qid": 0,
00:14:16.470 "state": "enabled",
00:14:16.471 "thread": "nvmf_tgt_poll_group_000",
00:14:16.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:14:16.471 "listen_address": {
00:14:16.471 "trtype": "RDMA",
00:14:16.471 "adrfam": "IPv4",
00:14:16.471 "traddr": "192.168.100.8",
00:14:16.471 "trsvcid": "4420"
00:14:16.471 },
00:14:16.471 "peer_address": {
00:14:16.471 "trtype": "RDMA",
00:14:16.471 "adrfam": "IPv4",
00:14:16.471 "traddr": "192.168.100.8",
00:14:16.471 "trsvcid": "36258"
00:14:16.471 },
00:14:16.471 "auth": {
00:14:16.471 "state": "completed",
00:14:16.471 "digest": "sha256",
00:14:16.471 "dhgroup": "null"
00:14:16.471 }
00:14:16.471 }
00:14:16.471 ]'
00:14:16.471 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:16.471 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:16.471 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:16.471 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:16.471 17:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:16.471 17:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:16.471 17:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:16.471 17:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:16.731 17:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:14:16.731 17:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:14:17.673 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:17.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:17.935 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:17.935 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
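The nvme_connect/nvme disconnect pair above drives the same key through the Linux kernel initiator. Reduced to a standalone nvme-cli invocation it looks like the sketch below, with placeholder secrets (the real DHHC-1 strings are the ones in the trace; HOSTNQN/HOSTID are stand-ins for the values pinned there):

  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret 'DHHC-1:02:<host-key-base64>:' \
    --dhchap-ctrl-secret 'DHHC-1:01:<ctrl-key-base64>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

--dhchap-secret authenticates the host to the controller; adding --dhchap-ctrl-secret also makes the controller prove possession of its own key (bidirectional DH-HMAC-CHAP).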
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:17.936 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:18.196
00:14:18.196 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:18.196 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:18.196 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:18.456 {
00:14:18.456 "cntlid": 7,
00:14:18.456 "qid": 0,
00:14:18.456 "state": "enabled",
00:14:18.456 "thread": "nvmf_tgt_poll_group_000",
00:14:18.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:14:18.456 "listen_address": {
00:14:18.456 "trtype": "RDMA",
00:14:18.456 "adrfam": "IPv4",
00:14:18.456 "traddr": "192.168.100.8",
00:14:18.456 "trsvcid": "4420"
00:14:18.456 },
00:14:18.456 "peer_address": {
00:14:18.456 "trtype": "RDMA",
00:14:18.456 "adrfam": "IPv4",
00:14:18.456 "traddr": "192.168.100.8",
00:14:18.456 "trsvcid": "38868"
00:14:18.456 },
00:14:18.456 "auth": {
00:14:18.456 "state": "completed",
00:14:18.456 "digest": "sha256",
00:14:18.456 "dhgroup": "null"
00:14:18.456 }
00:14:18.456 }
00:14:18.456 ]'
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:18.456 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:18.716 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:18.717 17:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:18.717 17:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:18.717 17:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:14:18.717 17:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:14:19.658 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:19.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
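Note what changes in the key3 pass just completed: nvmf_subsystem_add_host was issued with --dhchap-key key3 and no --dhchap-ctrlr-key, and the matching nvme connect carried only --dhchap-secret (a DHHC-1:03: string). That is the visible effect of the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the trace: ${var:+word} expands to nothing when no controller key is configured, so the option pair disappears and the pass runs unidirectional authentication. The same bash pattern in isolation (names illustrative):

  # expands to zero arguments when ckeys[$keyid] is empty or unset
  ckey_args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey_args[@]}"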
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:19.919 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:19.920 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.920 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.920 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.920 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:19.920 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:19.920 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:20.180
00:14:20.180 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:20.180 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:20.180 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:20.440 {
00:14:20.440 "cntlid": 9,
00:14:20.440 "qid": 0,
00:14:20.440 "state": "enabled",
00:14:20.440 "thread": "nvmf_tgt_poll_group_000",
00:14:20.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:14:20.440 "listen_address": {
00:14:20.440 "trtype": "RDMA",
00:14:20.440 "adrfam": "IPv4",
00:14:20.440 "traddr": "192.168.100.8",
00:14:20.440 "trsvcid": "4420"
00:14:20.440 },
00:14:20.440 "peer_address": {
00:14:20.440 "trtype": "RDMA",
00:14:20.440 "adrfam": "IPv4",
00:14:20.440 "traddr": "192.168.100.8",
00:14:20.440 "trsvcid": "51243"
00:14:20.440 },
00:14:20.440 "auth": {
00:14:20.440 "state": "completed",
00:14:20.440 "digest": "sha256",
00:14:20.440 "dhgroup": "ffdhe2048"
00:14:20.440 }
00:14:20.440 }
00:14:20.440 ]'
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:20.440 17:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:20.701 17:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:14:20.701 17:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:14:21.643 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:21.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:21.643 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:21.643 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:21.904 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:22.165
00:14:22.165 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:22.165 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:22.165 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:22.426 {
00:14:22.426 "cntlid": 11,
00:14:22.426 "qid": 0,
00:14:22.426 "state": "enabled",
00:14:22.426 "thread": "nvmf_tgt_poll_group_000",
00:14:22.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:14:22.426 "listen_address": {
00:14:22.426 "trtype": "RDMA",
00:14:22.426 "adrfam": "IPv4",
00:14:22.426 "traddr": "192.168.100.8",
00:14:22.426 "trsvcid": "4420"
00:14:22.426 },
00:14:22.426 "peer_address": {
00:14:22.426 "trtype": "RDMA",
00:14:22.426 "adrfam": "IPv4",
00:14:22.426 "traddr": "192.168.100.8",
00:14:22.426 "trsvcid": "50102"
00:14:22.426 },
00:14:22.426 "auth": {
00:14:22.426 "state": "completed",
00:14:22.426 "digest": "sha256",
00:14:22.426 "dhgroup": "ffdhe2048"
00:14:22.426 }
00:14:22.426 }
00:14:22.426 ]'
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:22.426 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:22.427 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:22.427 17:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:22.688 17:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:14:22.688 17:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:14:23.631 17:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:23.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:23.631 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:23.631 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.631 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:23.631 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.631 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:23.631 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:23.631 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:23.892 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:24.154
00:14:24.154 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:24.154 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:24.154 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:24.415 {
00:14:24.415 "cntlid": 13,
00:14:24.415 "qid": 0,
00:14:24.415 "state": "enabled",
00:14:24.415 "thread": "nvmf_tgt_poll_group_000",
00:14:24.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:14:24.415 "listen_address": {
00:14:24.415 "trtype": "RDMA",
00:14:24.415 "adrfam": "IPv4",
00:14:24.415 "traddr": "192.168.100.8",
00:14:24.415 "trsvcid": "4420"
00:14:24.415 },
00:14:24.415 "peer_address": {
00:14:24.415 "trtype": "RDMA",
00:14:24.415 "adrfam": "IPv4",
00:14:24.415 "traddr": "192.168.100.8",
00:14:24.415 "trsvcid": "38697"
00:14:24.415 },
00:14:24.415 "auth": {
00:14:24.415 "state": "completed",
00:14:24.415 "digest": "sha256",
00:14:24.415 "dhgroup": "ffdhe2048"
00:14:24.415 }
00:14:24.415 }
00:14:24.415 ]'
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:24.415 17:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:24.676 17:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:14:24.676 17:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:14:25.619 17:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:25.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:25.619 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:25.619 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.619 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.619 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.619 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:25.619 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:25.619 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:25.880 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:26.141
00:14:26.141 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:26.141 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:26.141 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:26.402 {
00:14:26.402 "cntlid": 15,
00:14:26.402 "qid": 0,
00:14:26.402 "state": "enabled",
00:14:26.402 "thread": "nvmf_tgt_poll_group_000",
00:14:26.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:14:26.402 "listen_address": {
00:14:26.402 "trtype": "RDMA",
00:14:26.402 "adrfam": "IPv4",
00:14:26.402 "traddr": "192.168.100.8",
00:14:26.402 "trsvcid": "4420"
00:14:26.402 },
00:14:26.402 "peer_address": {
00:14:26.402 "trtype": "RDMA",
00:14:26.402 "adrfam": "IPv4",
00:14:26.402 "traddr": "192.168.100.8",
00:14:26.402 "trsvcid": "33786"
00:14:26.402 },
00:14:26.402 "auth": {
00:14:26.402 "state": "completed",
00:14:26.402 "digest": "sha256",
00:14:26.402 "dhgroup": "ffdhe2048"
00:14:26.402 }
00:14:26.402 }
00:14:26.402 ]'
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:26.402 17:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:26.663 17:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:14:26.663 17:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:14:27.606 17:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:27.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:27.606 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:27.606 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.606 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.606 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
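At this point the outer for dhgroup loop advances from ffdhe2048 to ffdhe3072 and replays all four keys. The driver logic visible in the trace reduces to a nested loop along these lines (a sketch over the script's own hostrpc/connect_authenticate helpers; the contents of the dhgroups array are an assumption, since only null, ffdhe2048 and ffdhe3072 appear in this excerpt):

  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
      connect_authenticate sha256 "$dhgroup" "$keyid"
    done
  done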
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:27.867 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:28.129
00:14:28.129 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:28.129 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:28.129 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:28.391 {
00:14:28.391 "cntlid": 17,
00:14:28.391 "qid": 0,
00:14:28.391 "state": "enabled",
00:14:28.391 "thread": "nvmf_tgt_poll_group_000",
00:14:28.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:14:28.391 "listen_address": {
00:14:28.391 "trtype": "RDMA",
00:14:28.391 "adrfam": "IPv4",
00:14:28.391 "traddr": "192.168.100.8",
00:14:28.391 "trsvcid": "4420"
00:14:28.391 },
00:14:28.391 "peer_address": {
00:14:28.391 "trtype": "RDMA",
00:14:28.391 "adrfam": "IPv4",
00:14:28.391 "traddr": "192.168.100.8",
00:14:28.391 "trsvcid": "43351"
00:14:28.391 },
00:14:28.391 "auth": {
00:14:28.391 "state": "completed",
00:14:28.391 "digest": "sha256",
00:14:28.391 "dhgroup": "ffdhe3072"
00:14:28.391 }
00:14:28.391 }
00:14:28.391 ]'
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:28.391 17:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:28.653 17:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:14:28.653 17:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:14:29.597 17:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:29.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:29.597 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:29.597 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.597 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:29.597 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:29.597 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:29.597 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:29.597 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:29.858 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:29.859 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:30.120
00:14:30.120 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:30.120 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:30.120 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:30.380 {
00:14:30.380 "cntlid": 19,
00:14:30.380 "qid": 0,
00:14:30.380 "state": "enabled",
00:14:30.380 "thread": "nvmf_tgt_poll_group_000",
00:14:30.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:14:30.380 "listen_address": {
00:14:30.380 "trtype": "RDMA",
00:14:30.380 "adrfam": "IPv4",
00:14:30.380 "traddr": "192.168.100.8",
00:14:30.380 "trsvcid": "4420"
00:14:30.380 },
00:14:30.380 "peer_address": {
00:14:30.380 "trtype": "RDMA",
00:14:30.380 "adrfam": "IPv4",
00:14:30.380 "traddr": "192.168.100.8",
00:14:30.380 "trsvcid": "57944"
00:14:30.380 },
00:14:30.380 "auth": {
00:14:30.380 "state": "completed",
00:14:30.380 "digest": "sha256",
00:14:30.380 "dhgroup": "ffdhe3072"
00:14:30.380 }
00:14:30.380 }
00:14:30.380 ]'
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:30.380 17:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:30.642 17:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:14:30.642 17:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:14:31.585 17:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:31.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:31.585 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.585 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.585 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.585 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.585 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.585 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.845 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.106 00:14:32.106 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.106 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.106 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.367 { 00:14:32.367 "cntlid": 21, 00:14:32.367 "qid": 0, 00:14:32.367 "state": "enabled", 00:14:32.367 "thread": "nvmf_tgt_poll_group_000", 00:14:32.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:32.367 "listen_address": { 00:14:32.367 "trtype": "RDMA", 00:14:32.367 "adrfam": "IPv4", 00:14:32.367 "traddr": "192.168.100.8", 00:14:32.367 "trsvcid": "4420" 00:14:32.367 }, 00:14:32.367 "peer_address": { 00:14:32.367 "trtype": "RDMA", 00:14:32.367 "adrfam": "IPv4", 00:14:32.367 "traddr": "192.168.100.8", 00:14:32.367 "trsvcid": "52539" 00:14:32.367 }, 00:14:32.367 "auth": { 00:14:32.367 "state": "completed", 00:14:32.367 "digest": "sha256", 00:14:32.367 "dhgroup": "ffdhe3072" 00:14:32.367 } 00:14:32.367 } 00:14:32.367 ]' 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.367 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.627 17:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:14:32.627 17:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:14:33.568 17:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.568 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:33.568 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.568 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.568 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.568 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.568 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.568 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.828 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:33.828 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.828 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:33.828 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:33.828 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:33.829 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.829 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:14:33.829 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.829 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.829 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.829 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:33.829 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.829 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.089 00:14:34.089 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.089 17:59:52 
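
The iteration above ends with a key3 add_host call that carries no --dhchap-ctrlr-key. That is the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at auth.sh@68 doing its job: bash's ${var:+word} form expands to word only when var is set and non-empty, so an empty ckeys[3] drops the option pair entirely. A minimal stand-alone sketch of the same pattern (the key names below are stand-ins, not the keys registered by this test):

#!/usr/bin/env bash
keys=(key0 key1 key2 key3)
ckeys=(ckey0 ckey1 ckey2 "")   # ckeys[3] empty, as in this run
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
for keyid in "${!keys[@]}"; do
    # Expands to the option pair only when ckeys[keyid] is non-empty, so
    # key3 is granted access without bidirectional authentication.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
done
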
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.089 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.349 { 00:14:34.349 "cntlid": 23, 00:14:34.349 "qid": 0, 00:14:34.349 "state": "enabled", 00:14:34.349 "thread": "nvmf_tgt_poll_group_000", 00:14:34.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:34.349 "listen_address": { 00:14:34.349 "trtype": "RDMA", 00:14:34.349 "adrfam": "IPv4", 00:14:34.349 "traddr": "192.168.100.8", 00:14:34.349 "trsvcid": "4420" 00:14:34.349 }, 00:14:34.349 "peer_address": { 00:14:34.349 "trtype": "RDMA", 00:14:34.349 "adrfam": "IPv4", 00:14:34.349 "traddr": "192.168.100.8", 00:14:34.349 "trsvcid": "45643" 00:14:34.349 }, 00:14:34.349 "auth": { 00:14:34.349 "state": "completed", 00:14:34.349 "digest": "sha256", 00:14:34.349 "dhgroup": "ffdhe3072" 00:14:34.349 } 00:14:34.349 } 00:14:34.349 ]' 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.349 17:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.610 17:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:14:34.610 17:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:14:35.550 17:59:53 
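
The nvme_connect step above exercises the kernel initiator with literal DH-HMAC-CHAP secrets rather than named keys. In the DHHC-1:XX:...: strings, the middle field appears to identify the hash used to transform the secret (00 an untransformed key, 01/02/03 mapping to SHA-256/384/512) and the base64 payload carries the key material; treat that reading as an interpretation of the trace, not something the log itself states. A hedged sketch of the same call with placeholder secrets:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# -i 1 requests a single I/O queue, -l 0 a zero controller-loss timeout,
# and --hostid reuses the UUID portion of the host NQN, as in the trace.
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "${hostnqn##*:}" -l 0 \
    --dhchap-secret 'DHHC-1:00:<base64 host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:00:<base64 controller key>:'

nvme disconnect -n "$subnqn"
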
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.550 17:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.550 17:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.550 17:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.550 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.550 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.550 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.550 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.550 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.811 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.072 00:14:36.072 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.072 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.073 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.334 { 00:14:36.334 "cntlid": 25, 00:14:36.334 "qid": 0, 00:14:36.334 "state": "enabled", 00:14:36.334 "thread": "nvmf_tgt_poll_group_000", 00:14:36.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:36.334 "listen_address": { 00:14:36.334 "trtype": "RDMA", 00:14:36.334 "adrfam": "IPv4", 00:14:36.334 "traddr": "192.168.100.8", 00:14:36.334 "trsvcid": "4420" 00:14:36.334 }, 00:14:36.334 "peer_address": { 00:14:36.334 "trtype": "RDMA", 00:14:36.334 "adrfam": "IPv4", 00:14:36.334 "traddr": "192.168.100.8", 00:14:36.334 "trsvcid": "36297" 00:14:36.334 }, 00:14:36.334 "auth": { 00:14:36.334 "state": "completed", 00:14:36.334 "digest": "sha256", 00:14:36.334 "dhgroup": "ffdhe4096" 00:14:36.334 } 00:14:36.334 } 00:14:36.334 ]' 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.334 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.596 17:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:14:36.596 17:59:54 
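
Each attach above is verified twice: the host app must report the attached controller by name, and the target must report a qpair whose auth block matches the digest and dhgroup under test. A condensed sketch of that check, using the same RPCs and jq filters as the trace (host socket as in this run; the target is assumed to answer on rpc.py's default socket, which is what rpc_cmd uses here):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: the attached controller must show up as nvme0.
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: the qpair must have completed auth with the expected params.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
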
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:14:37.652 17:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.652 17:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:37.652 17:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.652 17:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.652 17:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.652 17:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.652 17:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.652 17:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.652 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.001 00:14:38.001 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.001 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.001 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.262 { 00:14:38.262 "cntlid": 27, 00:14:38.262 "qid": 0, 00:14:38.262 "state": "enabled", 00:14:38.262 "thread": "nvmf_tgt_poll_group_000", 00:14:38.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:38.262 "listen_address": { 00:14:38.262 "trtype": "RDMA", 00:14:38.262 "adrfam": "IPv4", 00:14:38.262 "traddr": "192.168.100.8", 00:14:38.262 "trsvcid": "4420" 00:14:38.262 }, 00:14:38.262 "peer_address": { 00:14:38.262 "trtype": "RDMA", 00:14:38.262 "adrfam": "IPv4", 00:14:38.262 "traddr": "192.168.100.8", 00:14:38.262 "trsvcid": "55686" 00:14:38.262 }, 00:14:38.262 "auth": { 00:14:38.262 "state": "completed", 00:14:38.262 "digest": "sha256", 00:14:38.262 "dhgroup": "ffdhe4096" 00:14:38.262 } 00:14:38.262 } 00:14:38.262 ]' 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.262 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.523 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:14:38.523 17:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:14:39.464 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.464 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:39.464 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.464 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.464 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.464 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.464 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.464 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.725 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.985 00:14:39.985 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.985 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.985 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.245 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.246 { 00:14:40.246 "cntlid": 29, 00:14:40.246 "qid": 0, 00:14:40.246 "state": "enabled", 00:14:40.246 "thread": "nvmf_tgt_poll_group_000", 00:14:40.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:40.246 "listen_address": { 00:14:40.246 "trtype": "RDMA", 00:14:40.246 "adrfam": "IPv4", 00:14:40.246 "traddr": "192.168.100.8", 00:14:40.246 "trsvcid": "4420" 00:14:40.246 }, 00:14:40.246 "peer_address": { 00:14:40.246 "trtype": "RDMA", 00:14:40.246 "adrfam": "IPv4", 00:14:40.246 "traddr": "192.168.100.8", 00:14:40.246 "trsvcid": "55846" 00:14:40.246 }, 00:14:40.246 "auth": { 00:14:40.246 "state": "completed", 00:14:40.246 "digest": "sha256", 00:14:40.246 "dhgroup": "ffdhe4096" 00:14:40.246 } 00:14:40.246 } 00:14:40.246 ]' 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.246 17:59:58 
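
Condensing one host-side iteration out of the interleaved trace (key2 and ckey2 are names of keys registered with the host app earlier in the test, outside this excerpt, not literal secrets):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Pin the host to a single digest/dhgroup pair so the DH-HMAC-CHAP
# negotiation can only land on the combination under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Attach with bidirectional authentication (host key + controller key).
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# ... qpair assertions from connect_authenticate run here ...

"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
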
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.246 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.506 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:14:40.507 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:14:41.451 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.451 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:41.451 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.451 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.451 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.451 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.451 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.451 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.711 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:41.711 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.711 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.711 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.711 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:41.711 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.711 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:14:41.712 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.712 18:00:00 
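
The target-side half of each pass is symmetric: the host NQN is granted access with the keys for the current keyid before the connect, and revoked again after the disconnect, so every iteration starts from a clean host list. A sketch of the key3 case running above (target assumed on rpc.py's default socket, as rpc_cmd uses in this run):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Grant: require DH-HMAC-CHAP with key3; no --dhchap-ctrlr-key here, so
# this iteration tests unidirectional authentication.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# ... host connects, authenticates, and disconnects ...

# Revoke before the next digest/dhgroup/key combination.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
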
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.712 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.712 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:41.712 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.712 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.973 00:14:41.973 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.973 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.973 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.234 { 00:14:42.234 "cntlid": 31, 00:14:42.234 "qid": 0, 00:14:42.234 "state": "enabled", 00:14:42.234 "thread": "nvmf_tgt_poll_group_000", 00:14:42.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:42.234 "listen_address": { 00:14:42.234 "trtype": "RDMA", 00:14:42.234 "adrfam": "IPv4", 00:14:42.234 "traddr": "192.168.100.8", 00:14:42.234 "trsvcid": "4420" 00:14:42.234 }, 00:14:42.234 "peer_address": { 00:14:42.234 "trtype": "RDMA", 00:14:42.234 "adrfam": "IPv4", 00:14:42.234 "traddr": "192.168.100.8", 00:14:42.234 "trsvcid": "35524" 00:14:42.234 }, 00:14:42.234 "auth": { 00:14:42.234 "state": "completed", 00:14:42.234 "digest": "sha256", 00:14:42.234 "dhgroup": "ffdhe4096" 00:14:42.234 } 00:14:42.234 } 00:14:42.234 ]' 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.234 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.494 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:14:42.495 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.436 18:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.697 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.958 00:14:43.958 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.958 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.958 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.221 { 00:14:44.221 "cntlid": 33, 00:14:44.221 "qid": 0, 00:14:44.221 "state": "enabled", 00:14:44.221 "thread": "nvmf_tgt_poll_group_000", 00:14:44.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:44.221 "listen_address": { 00:14:44.221 "trtype": "RDMA", 00:14:44.221 "adrfam": "IPv4", 00:14:44.221 "traddr": "192.168.100.8", 00:14:44.221 "trsvcid": "4420" 00:14:44.221 }, 00:14:44.221 "peer_address": { 00:14:44.221 "trtype": "RDMA", 00:14:44.221 "adrfam": "IPv4", 00:14:44.221 "traddr": "192.168.100.8", 00:14:44.221 "trsvcid": "48492" 00:14:44.221 }, 00:14:44.221 "auth": { 00:14:44.221 "state": "completed", 00:14:44.221 "digest": "sha256", 00:14:44.221 "dhgroup": "ffdhe6144" 00:14:44.221 } 00:14:44.221 } 00:14:44.221 ]' 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:14:44.221 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.482 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.482 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.482 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.482 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.482 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:14:44.482 18:00:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:14:45.425 18:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.685 18:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:45.685 18:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.685 18:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.685 18:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.685 18:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.685 18:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:45.685 18:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.685 18:00:04 
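
By this point the trace has cycled keys 0-3 through ffdhe3072 and ffdhe4096 and is partway through ffdhe6144. The control flow, reconstructed from the auth.sh line markers (@119 dhgroup loop, @120 keyid loop, @121 host reconfiguration, @123 per-combination test), looks roughly like the following; the array contents are limited to what this excerpt shows, and the full script presumably covers more digests and dhgroups:

# hostrpc is auth.sh@31: the host app's RPC wrapper seen throughout the log.
hostrpc() {
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}

dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # only these appear in this excerpt
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do                      # auth.sh@119
    for keyid in "${!keys[@]}"; do                       # auth.sh@120
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
            --dhchap-dhgroups "$dhgroup"                 # auth.sh@121
        # connect_authenticate (auth.sh@65-@83) performs the add_host,
        # attach, qpair assertions, nvme connect/disconnect and
        # remove_host seen throughout this log.
        connect_authenticate sha256 "$dhgroup" "$keyid"  # auth.sh@123
    done
done
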
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.685 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.257 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.257 { 00:14:46.257 "cntlid": 35, 00:14:46.257 "qid": 0, 00:14:46.257 "state": "enabled", 00:14:46.257 "thread": "nvmf_tgt_poll_group_000", 00:14:46.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:46.257 "listen_address": { 00:14:46.257 "trtype": "RDMA", 00:14:46.257 "adrfam": "IPv4", 00:14:46.257 "traddr": "192.168.100.8", 00:14:46.257 "trsvcid": "4420" 00:14:46.257 }, 00:14:46.257 "peer_address": { 00:14:46.257 "trtype": "RDMA", 00:14:46.257 "adrfam": "IPv4", 00:14:46.257 "traddr": "192.168.100.8", 00:14:46.257 "trsvcid": "43291" 00:14:46.257 }, 00:14:46.257 "auth": { 00:14:46.257 "state": "completed", 00:14:46.257 "digest": "sha256", 00:14:46.257 "dhgroup": "ffdhe6144" 00:14:46.257 } 00:14:46.257 } 
00:14:46.257 ]' 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.257 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.517 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.517 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.517 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.517 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.517 18:00:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.517 18:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:14:46.517 18:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:14:47.459 18:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.721 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.293 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.293 { 00:14:48.293 "cntlid": 37, 00:14:48.293 "qid": 0, 00:14:48.293 "state": "enabled", 00:14:48.293 "thread": "nvmf_tgt_poll_group_000", 00:14:48.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:48.293 "listen_address": { 00:14:48.293 "trtype": "RDMA", 00:14:48.293 "adrfam": "IPv4", 00:14:48.293 "traddr": "192.168.100.8", 00:14:48.293 "trsvcid": "4420" 00:14:48.293 }, 00:14:48.293 "peer_address": { 00:14:48.293 "trtype": "RDMA", 00:14:48.293 "adrfam": 
"IPv4", 00:14:48.293 "traddr": "192.168.100.8", 00:14:48.293 "trsvcid": "39395" 00:14:48.293 }, 00:14:48.293 "auth": { 00:14:48.293 "state": "completed", 00:14:48.293 "digest": "sha256", 00:14:48.293 "dhgroup": "ffdhe6144" 00:14:48.293 } 00:14:48.293 } 00:14:48.293 ]' 00:14:48.293 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.554 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.554 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.554 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:48.554 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.554 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.554 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.554 18:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.815 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:14:48.815 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:14:49.758 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.758 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:49.758 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.758 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.758 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.758 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.758 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.758 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.020 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.280 00:14:50.280 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.280 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.280 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.541 { 00:14:50.541 "cntlid": 39, 00:14:50.541 "qid": 0, 00:14:50.541 "state": "enabled", 00:14:50.541 "thread": "nvmf_tgt_poll_group_000", 00:14:50.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:50.541 "listen_address": { 00:14:50.541 "trtype": "RDMA", 00:14:50.541 "adrfam": "IPv4", 00:14:50.541 
"traddr": "192.168.100.8", 00:14:50.541 "trsvcid": "4420" 00:14:50.541 }, 00:14:50.541 "peer_address": { 00:14:50.541 "trtype": "RDMA", 00:14:50.541 "adrfam": "IPv4", 00:14:50.541 "traddr": "192.168.100.8", 00:14:50.541 "trsvcid": "36722" 00:14:50.541 }, 00:14:50.541 "auth": { 00:14:50.541 "state": "completed", 00:14:50.541 "digest": "sha256", 00:14:50.541 "dhgroup": "ffdhe6144" 00:14:50.541 } 00:14:50.541 } 00:14:50.541 ]' 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:50.541 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.541 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.541 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.541 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.815 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:14:50.815 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:51.758 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.019 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:52.019 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.019 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.019 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:52.019 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.019 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.020 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.020 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.020 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.020 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.020 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.020 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.020 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.593 00:14:52.593 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.593 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.593 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.593 { 00:14:52.593 "cntlid": 41, 00:14:52.593 "qid": 0, 00:14:52.593 "state": "enabled", 
00:14:52.593 "thread": "nvmf_tgt_poll_group_000", 00:14:52.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:52.593 "listen_address": { 00:14:52.593 "trtype": "RDMA", 00:14:52.593 "adrfam": "IPv4", 00:14:52.593 "traddr": "192.168.100.8", 00:14:52.593 "trsvcid": "4420" 00:14:52.593 }, 00:14:52.593 "peer_address": { 00:14:52.593 "trtype": "RDMA", 00:14:52.593 "adrfam": "IPv4", 00:14:52.593 "traddr": "192.168.100.8", 00:14:52.593 "trsvcid": "53945" 00:14:52.593 }, 00:14:52.593 "auth": { 00:14:52.593 "state": "completed", 00:14:52.593 "digest": "sha256", 00:14:52.593 "dhgroup": "ffdhe8192" 00:14:52.593 } 00:14:52.593 } 00:14:52.593 ]' 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.593 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.855 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:52.855 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.855 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.855 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.855 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.117 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:14:53.117 18:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:14:53.690 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.951 18:00:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.951 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.523 00:14:54.523 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.523 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.523 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.822 { 00:14:54.822 "cntlid": 43, 00:14:54.822 "qid": 0, 00:14:54.822 "state": "enabled", 00:14:54.822 "thread": "nvmf_tgt_poll_group_000", 00:14:54.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:54.822 "listen_address": { 00:14:54.822 "trtype": "RDMA", 00:14:54.822 "adrfam": "IPv4", 00:14:54.822 "traddr": "192.168.100.8", 00:14:54.822 "trsvcid": "4420" 00:14:54.822 }, 00:14:54.822 "peer_address": { 00:14:54.822 "trtype": "RDMA", 00:14:54.822 "adrfam": "IPv4", 00:14:54.822 "traddr": "192.168.100.8", 00:14:54.822 "trsvcid": "46441" 00:14:54.822 }, 00:14:54.822 "auth": { 00:14:54.822 "state": "completed", 00:14:54.822 "digest": "sha256", 00:14:54.822 "dhgroup": "ffdhe8192" 00:14:54.822 } 00:14:54.822 } 00:14:54.822 ]' 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.822 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.082 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:14:55.082 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:14:56.021 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.022 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:56.022 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.022 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
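On the target side, each pass first authorizes the host NQN on the subsystem together with the DH-HMAC-CHAP keys it must present; key1 and ckey1 are names of keyring entries created earlier in the script, outside this excerpt. Sketched with rpc_cmd, the autotest wrapper for the target's RPC socket:

    # Allow the host on the subsystem and bind its auth keys.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # ...and remove it again at the end of the iteration.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
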
00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.282 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.854 00:14:56.854 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.854 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.854 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
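The SPDK-initiator leg (bdev_connect at target/auth.sh@60) is the in-SPDK counterpart of the nvme-cli connect: the host app attaches a controller over RDMA and authenticates with keyring references instead of inline secrets. Using the hostrpc helper sketched above:

    # Host-side SPDK initiator: attach with DH-HMAC-CHAP key names.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
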
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.116 { 00:14:57.116 "cntlid": 45, 00:14:57.116 "qid": 0, 00:14:57.116 "state": "enabled", 00:14:57.116 "thread": "nvmf_tgt_poll_group_000", 00:14:57.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:57.116 "listen_address": { 00:14:57.116 "trtype": "RDMA", 00:14:57.116 "adrfam": "IPv4", 00:14:57.116 "traddr": "192.168.100.8", 00:14:57.116 "trsvcid": "4420" 00:14:57.116 }, 00:14:57.116 "peer_address": { 00:14:57.116 "trtype": "RDMA", 00:14:57.116 "adrfam": "IPv4", 00:14:57.116 "traddr": "192.168.100.8", 00:14:57.116 "trsvcid": "45414" 00:14:57.116 }, 00:14:57.116 "auth": { 00:14:57.116 "state": "completed", 00:14:57.116 "digest": "sha256", 00:14:57.116 "dhgroup": "ffdhe8192" 00:14:57.116 } 00:14:57.116 } 00:14:57.116 ]' 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.116 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.376 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:14:57.376 18:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:14:58.316 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.316 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:58.316 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.316 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.316 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.316 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.316 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.316 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.577 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.146 00:14:59.146 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.146 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.146 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.408 
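connect_authenticate then proves that the session negotiated what was configured: it fetches the subsystem's queue pairs from the target and asserts on the auth object, which is exactly what the recurring jq / [[ ... ]] checks in this trace are doing. As a standalone sketch:

    # Verify the negotiated authentication parameters on the target.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
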
18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.408 { 00:14:59.408 "cntlid": 47, 00:14:59.408 "qid": 0, 00:14:59.408 "state": "enabled", 00:14:59.408 "thread": "nvmf_tgt_poll_group_000", 00:14:59.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:14:59.408 "listen_address": { 00:14:59.408 "trtype": "RDMA", 00:14:59.408 "adrfam": "IPv4", 00:14:59.408 "traddr": "192.168.100.8", 00:14:59.408 "trsvcid": "4420" 00:14:59.408 }, 00:14:59.408 "peer_address": { 00:14:59.408 "trtype": "RDMA", 00:14:59.408 "adrfam": "IPv4", 00:14:59.408 "traddr": "192.168.100.8", 00:14:59.408 "trsvcid": "57782" 00:14:59.408 }, 00:14:59.408 "auth": { 00:14:59.408 "state": "completed", 00:14:59.408 "digest": "sha256", 00:14:59.408 "dhgroup": "ffdhe8192" 00:14:59.408 } 00:14:59.408 } 00:14:59.408 ]' 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.408 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.668 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:14:59.668 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.608 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.868 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.128 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
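Here the outer loop (target/auth.sh@118) advances from sha256 to sha384 and the dhgroup loop restarts at null, i.e. DH-HMAC-CHAP challenge-response without an FFDHE exchange. From the @118/@119/@120 markers, the drivers are three nested loops; schematically (the array contents are inferred from what this run exercises, so treat them as illustrative):

    # Every digest x dhgroup x key combination gets one full
    # configure / connect / verify / teardown cycle.
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
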
hostrpc bdev_nvme_get_controllers 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.128 { 00:15:01.128 "cntlid": 49, 00:15:01.128 "qid": 0, 00:15:01.128 "state": "enabled", 00:15:01.128 "thread": "nvmf_tgt_poll_group_000", 00:15:01.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:01.128 "listen_address": { 00:15:01.128 "trtype": "RDMA", 00:15:01.128 "adrfam": "IPv4", 00:15:01.128 "traddr": "192.168.100.8", 00:15:01.128 "trsvcid": "4420" 00:15:01.128 }, 00:15:01.128 "peer_address": { 00:15:01.128 "trtype": "RDMA", 00:15:01.128 "adrfam": "IPv4", 00:15:01.128 "traddr": "192.168.100.8", 00:15:01.128 "trsvcid": "34184" 00:15:01.128 }, 00:15:01.128 "auth": { 00:15:01.128 "state": "completed", 00:15:01.128 "digest": "sha384", 00:15:01.128 "dhgroup": "null" 00:15:01.128 } 00:15:01.128 } 00:15:01.128 ]' 00:15:01.128 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.389 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.389 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.389 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.389 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.389 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.389 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.389 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.650 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:01.650 18:00:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:02.595 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.595 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:02.595 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.595 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.595 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.595 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.595 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.595 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.595 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.856 00:15:02.856 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.856 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.857 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.118 { 00:15:03.118 "cntlid": 51, 00:15:03.118 "qid": 0, 00:15:03.118 "state": "enabled", 00:15:03.118 "thread": "nvmf_tgt_poll_group_000", 00:15:03.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:03.118 "listen_address": { 00:15:03.118 "trtype": "RDMA", 00:15:03.118 "adrfam": "IPv4", 00:15:03.118 "traddr": "192.168.100.8", 00:15:03.118 "trsvcid": "4420" 00:15:03.118 }, 00:15:03.118 "peer_address": { 00:15:03.118 "trtype": "RDMA", 00:15:03.118 "adrfam": "IPv4", 00:15:03.118 "traddr": "192.168.100.8", 00:15:03.118 "trsvcid": "56014" 00:15:03.118 }, 00:15:03.118 "auth": { 00:15:03.118 "state": "completed", 00:15:03.118 "digest": "sha384", 00:15:03.118 "dhgroup": "null" 00:15:03.118 } 00:15:03.118 } 00:15:03.118 ]' 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.118 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.379 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.379 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.379 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.379 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:03.379 18:00:21 
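Each iteration ends with a symmetric teardown so the next combination starts clean; stitched together from the recurring @78/@82/@83 steps (in the actual flow, the kernel connect and its verification sit between the first two):

    hostrpc bdev_nvme_detach_controller nvme0         # drop the SPDK-side session
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # drop the kernel session
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
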
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:04.323 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.323 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.323 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.323 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.584 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.584 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.584 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:04.584 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:15:04.584 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.846 00:15:04.846 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.846 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.846 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.107 { 00:15:05.107 "cntlid": 53, 00:15:05.107 "qid": 0, 00:15:05.107 "state": "enabled", 00:15:05.107 "thread": "nvmf_tgt_poll_group_000", 00:15:05.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:05.107 "listen_address": { 00:15:05.107 "trtype": "RDMA", 00:15:05.107 "adrfam": "IPv4", 00:15:05.107 "traddr": "192.168.100.8", 00:15:05.107 "trsvcid": "4420" 00:15:05.107 }, 00:15:05.107 "peer_address": { 00:15:05.107 "trtype": "RDMA", 00:15:05.107 "adrfam": "IPv4", 00:15:05.107 "traddr": "192.168.100.8", 00:15:05.107 "trsvcid": "32966" 00:15:05.107 }, 00:15:05.107 "auth": { 00:15:05.107 "state": "completed", 00:15:05.107 "digest": "sha384", 00:15:05.107 "dhgroup": "null" 00:15:05.107 } 00:15:05.107 } 00:15:05.107 ]' 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.107 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.368 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:15:05.368 18:00:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:15:06.312 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.312 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:06.312 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.312 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.312 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.312 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.312 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.312 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.573 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.834 00:15:06.834 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.834 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.834 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.095 { 00:15:07.095 "cntlid": 55, 00:15:07.095 "qid": 0, 00:15:07.095 "state": "enabled", 00:15:07.095 "thread": "nvmf_tgt_poll_group_000", 00:15:07.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:07.095 "listen_address": { 00:15:07.095 "trtype": "RDMA", 00:15:07.095 "adrfam": "IPv4", 00:15:07.095 "traddr": "192.168.100.8", 00:15:07.095 "trsvcid": "4420" 00:15:07.095 }, 00:15:07.095 "peer_address": { 00:15:07.095 "trtype": "RDMA", 00:15:07.095 "adrfam": "IPv4", 00:15:07.095 "traddr": "192.168.100.8", 00:15:07.095 "trsvcid": "46216" 00:15:07.095 }, 00:15:07.095 "auth": { 00:15:07.095 "state": "completed", 00:15:07.095 "digest": "sha384", 00:15:07.095 "dhgroup": "null" 00:15:07.095 } 00:15:07.095 } 00:15:07.095 ]' 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.095 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
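Each pass of the loop traced above reduces to the following sequence (a condensed shell sketch, not part of the captured output: socket paths, NQNs and key names are copied from this run, the DH-HMAC-CHAP keys key0..key3/ckey0..ckey2 are assumed to have been registered earlier in the script, and the target is assumed to be on rpc.py's default socket):

# One connect_authenticate iteration (here: digest sha384, dhgroup null, key3).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the initiator to the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

# Target side: authorize the host NQN with the key under test.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key3

# Attach a controller; this forces the DH-HMAC-CHAP handshake over RDMA.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key3

# Inspect the negotiated auth parameters on the admin qpair, then tear down.
$rpc nvmf_subsystem_get_qpairs $subnqn | jq '.[0].auth'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
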
00:15:07.355 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:07.355 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.298 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.560 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.822 00:15:08.822 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.822 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.822 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.083 { 00:15:09.083 "cntlid": 57, 00:15:09.083 "qid": 0, 00:15:09.083 "state": "enabled", 00:15:09.083 "thread": "nvmf_tgt_poll_group_000", 00:15:09.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:09.083 "listen_address": { 00:15:09.083 "trtype": "RDMA", 00:15:09.083 "adrfam": "IPv4", 00:15:09.083 "traddr": "192.168.100.8", 00:15:09.083 "trsvcid": "4420" 00:15:09.083 }, 00:15:09.083 "peer_address": { 00:15:09.083 "trtype": "RDMA", 00:15:09.083 "adrfam": "IPv4", 00:15:09.083 "traddr": "192.168.100.8", 00:15:09.083 "trsvcid": "40315" 00:15:09.083 }, 00:15:09.083 "auth": { 00:15:09.083 "state": "completed", 00:15:09.083 "digest": "sha384", 00:15:09.083 "dhgroup": "ffdhe2048" 00:15:09.083 } 00:15:09.083 } 00:15:09.083 ]' 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:15:09.083 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.345 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:09.345 18:00:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:10.370 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.370 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:10.370 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.370 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.370 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.370 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.370 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.370 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.633 
18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.633 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.894 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.894 { 00:15:10.894 "cntlid": 59, 00:15:10.894 "qid": 0, 00:15:10.894 "state": "enabled", 00:15:10.894 "thread": "nvmf_tgt_poll_group_000", 00:15:10.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:10.894 "listen_address": { 00:15:10.894 "trtype": "RDMA", 00:15:10.894 "adrfam": "IPv4", 00:15:10.894 "traddr": "192.168.100.8", 00:15:10.894 "trsvcid": "4420" 00:15:10.894 }, 00:15:10.894 "peer_address": { 00:15:10.894 "trtype": "RDMA", 00:15:10.894 "adrfam": "IPv4", 00:15:10.894 "traddr": "192.168.100.8", 00:15:10.894 "trsvcid": "49180" 00:15:10.894 }, 00:15:10.894 "auth": { 00:15:10.894 "state": "completed", 00:15:10.894 "digest": "sha384", 00:15:10.894 "dhgroup": "ffdhe2048" 00:15:10.894 } 00:15:10.894 } 00:15:10.894 ]' 00:15:10.894 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.155 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.155 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.155 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
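The three assertions around this point (digest, dhgroup, state) amount to the following check, sketched with the same $rpc shorthand as above:

# Capture the qpair listing once and match the negotiated auth fields.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
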
00:15:11.155 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.155 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.155 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.155 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.416 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:11.416 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:12.358 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.358 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:12.358 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.358 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.358 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.358 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.358 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.358 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.620 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.881 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.881 { 00:15:12.881 "cntlid": 61, 00:15:12.881 "qid": 0, 00:15:12.881 "state": "enabled", 00:15:12.881 "thread": "nvmf_tgt_poll_group_000", 00:15:12.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:12.881 "listen_address": { 00:15:12.881 "trtype": "RDMA", 00:15:12.881 "adrfam": "IPv4", 00:15:12.881 "traddr": "192.168.100.8", 00:15:12.881 "trsvcid": "4420" 00:15:12.881 }, 00:15:12.881 "peer_address": { 00:15:12.881 "trtype": "RDMA", 00:15:12.881 "adrfam": "IPv4", 00:15:12.881 "traddr": "192.168.100.8", 00:15:12.881 "trsvcid": "51954" 00:15:12.881 }, 00:15:12.881 "auth": { 00:15:12.881 "state": "completed", 00:15:12.881 "digest": "sha384", 00:15:12.881 "dhgroup": "ffdhe2048" 00:15:12.881 } 00:15:12.881 } 00:15:12.881 ]' 00:15:12.881 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.141 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:13.141 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.141 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.141 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.141 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.141 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.141 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.402 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:15:13.402 18:00:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:15:14.345 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.345 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:14.345 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.345 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.345 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.345 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.345 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.345 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:14.606 18:00:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.606 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.606 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.868 { 00:15:14.868 "cntlid": 63, 00:15:14.868 "qid": 0, 00:15:14.868 "state": "enabled", 00:15:14.868 "thread": "nvmf_tgt_poll_group_000", 00:15:14.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:14.868 "listen_address": { 00:15:14.868 "trtype": "RDMA", 00:15:14.868 "adrfam": "IPv4", 00:15:14.868 "traddr": "192.168.100.8", 00:15:14.868 "trsvcid": "4420" 00:15:14.868 }, 00:15:14.868 "peer_address": { 00:15:14.868 "trtype": "RDMA", 00:15:14.868 "adrfam": "IPv4", 00:15:14.868 "traddr": "192.168.100.8", 00:15:14.868 "trsvcid": "33532" 00:15:14.868 }, 00:15:14.868 "auth": { 00:15:14.868 "state": "completed", 00:15:14.868 "digest": "sha384", 00:15:14.868 "dhgroup": "ffdhe2048" 00:15:14.868 } 00:15:14.868 } 00:15:14.868 ]' 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.868 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.130 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.130 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.130 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.130 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.130 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.391 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:15.391 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:15.962 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.225 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.485 00:15:16.746 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.747 { 00:15:16.747 "cntlid": 65, 00:15:16.747 "qid": 0, 00:15:16.747 "state": "enabled", 00:15:16.747 "thread": "nvmf_tgt_poll_group_000", 00:15:16.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:16.747 "listen_address": { 00:15:16.747 "trtype": "RDMA", 00:15:16.747 "adrfam": "IPv4", 00:15:16.747 "traddr": "192.168.100.8", 00:15:16.747 "trsvcid": "4420" 00:15:16.747 }, 00:15:16.747 "peer_address": { 00:15:16.747 "trtype": "RDMA", 00:15:16.747 "adrfam": "IPv4", 00:15:16.747 "traddr": "192.168.100.8", 00:15:16.747 "trsvcid": "47549" 
00:15:16.747 }, 00:15:16.747 "auth": { 00:15:16.747 "state": "completed", 00:15:16.747 "digest": "sha384", 00:15:16.747 "dhgroup": "ffdhe3072" 00:15:16.747 } 00:15:16.747 } 00:15:16.747 ]' 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.747 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.007 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.007 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.007 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.007 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.007 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:17.008 18:00:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:17.948 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.948 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:17.948 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.948 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.948 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.948 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.948 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.948 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.208 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.468 00:15:18.468 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.468 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.468 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.729 { 00:15:18.729 "cntlid": 67, 00:15:18.729 "qid": 0, 00:15:18.729 "state": "enabled", 00:15:18.729 "thread": "nvmf_tgt_poll_group_000", 00:15:18.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 
00:15:18.729 "listen_address": { 00:15:18.729 "trtype": "RDMA", 00:15:18.729 "adrfam": "IPv4", 00:15:18.729 "traddr": "192.168.100.8", 00:15:18.729 "trsvcid": "4420" 00:15:18.729 }, 00:15:18.729 "peer_address": { 00:15:18.729 "trtype": "RDMA", 00:15:18.729 "adrfam": "IPv4", 00:15:18.729 "traddr": "192.168.100.8", 00:15:18.729 "trsvcid": "37955" 00:15:18.729 }, 00:15:18.729 "auth": { 00:15:18.729 "state": "completed", 00:15:18.729 "digest": "sha384", 00:15:18.729 "dhgroup": "ffdhe3072" 00:15:18.729 } 00:15:18.729 } 00:15:18.729 ]' 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.729 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.990 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.990 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.990 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:18.990 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:19.931 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.931 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:19.931 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.192 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.453 00:15:20.453 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.453 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.453 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:20.714 {
00:15:20.714 "cntlid": 69,
00:15:20.714 "qid": 0,
00:15:20.714 "state": "enabled",
00:15:20.714 "thread": "nvmf_tgt_poll_group_000",
00:15:20.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:20.714 "listen_address": {
00:15:20.714 "trtype": "RDMA",
00:15:20.714 "adrfam": "IPv4",
00:15:20.714 "traddr": "192.168.100.8",
00:15:20.714 "trsvcid": "4420"
00:15:20.714 },
00:15:20.714 "peer_address": {
00:15:20.714 "trtype": "RDMA",
00:15:20.714 "adrfam": "IPv4",
00:15:20.714 "traddr": "192.168.100.8",
00:15:20.714 "trsvcid": "43303"
00:15:20.714 },
00:15:20.714 "auth": {
00:15:20.714 "state": "completed",
00:15:20.714 "digest": "sha384",
00:15:20.714 "dhgroup": "ffdhe3072"
00:15:20.714 }
00:15:20.714 }
00:15:20.714 ]'
00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:20.714 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:20.974 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:15:20.974 18:00:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:15:21.916 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:21.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:21.916 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:21.916 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.916 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.916 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.916 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:21.916 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:21.916 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:22.176 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:22.436
00:15:22.436 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:22.436 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:22.437 18:00:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:22.697 {
00:15:22.697 "cntlid": 71,
00:15:22.697 "qid": 0,
00:15:22.697 "state": "enabled",
00:15:22.697 "thread": "nvmf_tgt_poll_group_000",
00:15:22.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:22.697 "listen_address": {
00:15:22.697 "trtype": "RDMA",
00:15:22.697 "adrfam": "IPv4",
00:15:22.697 "traddr": "192.168.100.8",
00:15:22.697 "trsvcid": "4420"
00:15:22.697 },
00:15:22.697 "peer_address": {
00:15:22.697 "trtype": "RDMA",
00:15:22.697 "adrfam": "IPv4",
00:15:22.697 "traddr": "192.168.100.8",
00:15:22.697 "trsvcid": "60922"
00:15:22.697 },
00:15:22.697 "auth": {
00:15:22.697 "state": "completed",
00:15:22.697 "digest": "sha384",
00:15:22.697 "dhgroup": "ffdhe3072"
00:15:22.697 }
00:15:22.697 }
00:15:22.697 ]'
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:22.697 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:22.959 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:15:22.959 18:00:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:23.900 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:24.161 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:24.421
00:15:24.421 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:24.421 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:24.421 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:24.681 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:24.682 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:24.682 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.682 18:00:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:24.682 {
00:15:24.682 "cntlid": 73,
00:15:24.682 "qid": 0,
00:15:24.682 "state": "enabled",
00:15:24.682 "thread": "nvmf_tgt_poll_group_000",
00:15:24.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:24.682 "listen_address": {
00:15:24.682 "trtype": "RDMA",
00:15:24.682 "adrfam": "IPv4",
00:15:24.682 "traddr": "192.168.100.8",
00:15:24.682 "trsvcid": "4420"
00:15:24.682 },
00:15:24.682 "peer_address": {
00:15:24.682 "trtype": "RDMA",
00:15:24.682 "adrfam": "IPv4",
00:15:24.682 "traddr": "192.168.100.8",
00:15:24.682 "trsvcid": "33463"
00:15:24.682 },
00:15:24.682 "auth": {
00:15:24.682 "state": "completed",
00:15:24.682 "digest": "sha384",
00:15:24.682 "dhgroup": "ffdhe4096"
00:15:24.682 }
00:15:24.682 }
00:15:24.682 ]'
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:24.682 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:24.942 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:15:24.942 18:00:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:15:25.885 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:25.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:25.885 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:25.885 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.886 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.886 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.886 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:25.886 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:25.886 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:26.145 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:26.146 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:26.405
00:15:26.405 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:26.405 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:26.405 18:00:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:26.665 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:26.665 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:26.665 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.665 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.665 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.665 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:26.665 {
00:15:26.665 "cntlid": 75,
00:15:26.665 "qid": 0,
00:15:26.665 "state": "enabled",
00:15:26.665 "thread": "nvmf_tgt_poll_group_000",
00:15:26.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:26.665 "listen_address": {
00:15:26.665 "trtype": "RDMA",
00:15:26.665 "adrfam": "IPv4",
00:15:26.665 "traddr": "192.168.100.8",
00:15:26.665 "trsvcid": "4420"
00:15:26.665 },
00:15:26.665 "peer_address": {
00:15:26.665 "trtype": "RDMA",
00:15:26.665 "adrfam": "IPv4",
00:15:26.665 "traddr": "192.168.100.8",
00:15:26.665 "trsvcid": "59793"
00:15:26.665 },
00:15:26.665 "auth": {
00:15:26.665 "state": "completed",
00:15:26.665 "digest": "sha384",
00:15:26.665 "dhgroup": "ffdhe4096"
00:15:26.665 }
00:15:26.665 }
00:15:26.665 ]'
00:15:26.666 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:26.666 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:26.666 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:26.666 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:26.666 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:26.666 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:26.666 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:26.666 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:26.927 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:15:26.927 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:15:27.868 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:27.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:27.868 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:27.868 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.868 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.868 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.868 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:27.868 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:27.868 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:28.131 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:28.392
00:15:28.392 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:28.392 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:28.653 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:28.653 {
00:15:28.653 "cntlid": 77,
00:15:28.653 "qid": 0,
00:15:28.653 "state": "enabled",
00:15:28.653 "thread": "nvmf_tgt_poll_group_000",
00:15:28.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:28.653 "listen_address": {
00:15:28.653 "trtype": "RDMA",
00:15:28.653 "adrfam": "IPv4",
00:15:28.653 "traddr": "192.168.100.8",
00:15:28.653 "trsvcid": "4420"
00:15:28.653 },
00:15:28.653 "peer_address": {
00:15:28.653 "trtype": "RDMA",
00:15:28.653 "adrfam": "IPv4",
00:15:28.653 "traddr": "192.168.100.8",
00:15:28.653 "trsvcid": "54062"
00:15:28.653 },
00:15:28.653 "auth": {
00:15:28.653 "state": "completed",
00:15:28.653 "digest": "sha384",
00:15:28.653 "dhgroup": "ffdhe4096"
00:15:28.653 }
00:15:28.653 }
00:15:28.653 ]'
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:28.653 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:28.926 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:28.926 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:28.926 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:28.926 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:28.926 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:15:28.926 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:15:29.872 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:29.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:29.872 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:29.872 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.872 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.872 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.872 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:29.872 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:29.872 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:30.133 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:30.394
00:15:30.394 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:30.394 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:30.394 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:30.654 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:30.654 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:30.654 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.654 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:30.654 {
00:15:30.654 "cntlid": 79,
00:15:30.654 "qid": 0,
00:15:30.654 "state": "enabled",
00:15:30.654 "thread": "nvmf_tgt_poll_group_000",
00:15:30.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:30.654 "listen_address": {
00:15:30.654 "trtype": "RDMA",
00:15:30.654 "adrfam": "IPv4",
00:15:30.654 "traddr": "192.168.100.8",
00:15:30.654 "trsvcid": "4420"
00:15:30.654 },
00:15:30.654 "peer_address": {
00:15:30.654 "trtype": "RDMA",
00:15:30.654 "adrfam": "IPv4",
00:15:30.654 "traddr": "192.168.100.8",
00:15:30.654 "trsvcid": "56701"
00:15:30.654 },
00:15:30.654 "auth": {
00:15:30.654 "state": "completed",
00:15:30.654 "digest": "sha384",
00:15:30.654 "dhgroup": "ffdhe4096"
00:15:30.654 }
00:15:30.654 }
00:15:30.654 ]'
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:30.654 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:30.914 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:15:30.914 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:31.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:31.856 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:32.118 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:32.379
00:15:32.639 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:32.639 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:32.639 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:32.639 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:32.639 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:32.639 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.639 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:32.640 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.640 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:32.640 {
00:15:32.640 "cntlid": 81,
00:15:32.640 "qid": 0,
00:15:32.640 "state": "enabled",
00:15:32.640 "thread": "nvmf_tgt_poll_group_000",
00:15:32.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:32.640 "listen_address": {
00:15:32.640 "trtype": "RDMA",
00:15:32.640 "adrfam": "IPv4",
00:15:32.640 "traddr": "192.168.100.8",
00:15:32.640 "trsvcid": "4420"
00:15:32.640 },
00:15:32.640 "peer_address": {
00:15:32.640 "trtype": "RDMA",
00:15:32.640 "adrfam": "IPv4",
00:15:32.640 "traddr": "192.168.100.8",
00:15:32.640 "trsvcid": "36326"
00:15:32.640 },
00:15:32.640 "auth": {
00:15:32.640 "state": "completed",
00:15:32.640 "digest": "sha384",
00:15:32.640 "dhgroup": "ffdhe6144"
00:15:32.640 }
00:15:32.640 }
00:15:32.640 ]'
00:15:32.640 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:32.640 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:32.901 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:32.901 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:32.901 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:32.901 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:32.901 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:32.901 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:32.901 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:15:32.901 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:15:33.844 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:34.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:34.106 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:34.678
00:15:34.678 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:34.678 18:00:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:34.678 {
00:15:34.678 "cntlid": 83,
00:15:34.678 "qid": 0,
00:15:34.678 "state": "enabled",
00:15:34.678 "thread": "nvmf_tgt_poll_group_000",
00:15:34.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:34.678 "listen_address": {
00:15:34.678 "trtype": "RDMA",
00:15:34.678 "adrfam": "IPv4",
00:15:34.678 "traddr": "192.168.100.8",
00:15:34.678 "trsvcid": "4420"
00:15:34.678 },
00:15:34.678 "peer_address": {
00:15:34.678 "trtype": "RDMA",
00:15:34.678 "adrfam": "IPv4",
00:15:34.678 "traddr": "192.168.100.8",
00:15:34.678 "trsvcid": "51221"
00:15:34.678 },
00:15:34.678 "auth": {
00:15:34.678 "state": "completed",
00:15:34.678 "digest": "sha384",
00:15:34.678 "dhgroup": "ffdhe6144"
00:15:34.678 }
00:15:34.678 }
00:15:34.678 ]'
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:34.678 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:34.939 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:34.939 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:34.939 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:34.939 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:34.939 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:34.939 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:35.200 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:15:35.200 18:00:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:15:36.143 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:36.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:36.143 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:36.143 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.143 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.143 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.143 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:36.143 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:36.143 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:36.404 18:00:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:36.665
00:15:36.665 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:36.665 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:36.665 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:36.925 {
00:15:36.925 "cntlid": 85,
00:15:36.925 "qid": 0,
00:15:36.925 "state": "enabled",
00:15:36.925 "thread": "nvmf_tgt_poll_group_000",
00:15:36.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:36.925 "listen_address": {
00:15:36.925 "trtype": "RDMA",
00:15:36.925 "adrfam": "IPv4",
00:15:36.925 "traddr": "192.168.100.8",
00:15:36.925 "trsvcid": "4420"
00:15:36.925 },
00:15:36.925 "peer_address": {
00:15:36.925 "trtype": "RDMA",
00:15:36.925 "adrfam": "IPv4",
00:15:36.925 "traddr": "192.168.100.8",
00:15:36.925 "trsvcid": "40016"
00:15:36.925 },
00:15:36.925 "auth": {
00:15:36.925 "state": "completed",
00:15:36.925 "digest": "sha384",
00:15:36.925 "dhgroup": "ffdhe6144"
00:15:36.925 }
00:15:36.925 }
00:15:36.925 ]'
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:36.925 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:37.186 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:15:37.187 18:00:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:15:38.141 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:38.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:38.141 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:38.141 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:38.141 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.141 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:38.141 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:38.141 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:38.141 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:38.664
00:15:38.664 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:38.664 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:38.664 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:38.925 {
00:15:38.925 "cntlid": 87,
00:15:38.925 "qid": 0,
00:15:38.925 "state": "enabled",
00:15:38.925 "thread": "nvmf_tgt_poll_group_000",
00:15:38.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:38.925 "listen_address": {
00:15:38.925 "trtype": "RDMA",
00:15:38.925 "adrfam": "IPv4",
00:15:38.925 "traddr": "192.168.100.8",
00:15:38.925 "trsvcid": "4420"
00:15:38.925 },
00:15:38.925 "peer_address": {
00:15:38.925 "trtype": "RDMA",
00:15:38.925 "adrfam": "IPv4",
00:15:38.925 "traddr": "192.168.100.8",
00:15:38.925 "trsvcid": "54384"
00:15:38.925 },
00:15:38.925 "auth": {
00:15:38.925 "state": "completed",
00:15:38.925 "digest": "sha384",
00:15:38.925 "dhgroup": "ffdhe6144"
00:15:38.925 }
00:15:38.925 }
00:15:38.925 ]'
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144
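For readers following the trace, the flow that target/auth.sh repeats above reduces to one loop: pick a digest/dhgroup combination, authorize the host key on the subsystem, attach a controller (which only succeeds if the DH-HMAC-CHAP handshake completes), verify the negotiated parameters on the live qpair, then repeat the handshake through the kernel initiator and tear everything down. The sketch below condenses one iteration under stated assumptions: the rpc.py path, addresses, NQNs, and jq checks are taken verbatim from the log; the loop bounds, variable names, and the use of the target's default RPC socket for the nvmf_* calls are assumptions (in the log, hostrpc is rpc.py pointed at the host-side socket /var/tmp/host.sock and rpc_cmd is the target-side wrapper). It is an illustrative reconstruction, not the verbatim test source.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate iteration (assumed loop bounds).
set -e
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
subnqn=nqn.2024-03.io.spdk:cnode0

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do   # groups seen in this excerpt
  for keyid in 0 1 2 3; do
    # Host-side SPDK (-s /var/tmp/host.sock): restrict digest/dhgroup negotiation.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    # Target side (default socket): authorize the host with the key under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    # Attach a controller; success implies DH-HMAC-CHAP completed.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid"
    # Verify what was actually negotiated on the live qpair.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Repeat the handshake through the kernel initiator (per-key secret elided;
    # the actual DHHC-1 secrets are visible in the trace above).
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
        --dhchap-secret "DHHC-1:..."
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done

Note the asymmetry visible in the trace: for key0 through key2 a controller key is also configured, so both add_host and the connect commands carry the controller secret (--dhchap-ctrlr-key / --dhchap-ctrl-secret) for bidirectional authentication, whereas the key3 iterations omit it and authenticate in one direction only.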
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.403 18:00:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.664 00:15:38.664 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.664 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.664 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.925 { 00:15:38.925 "cntlid": 87, 00:15:38.925 "qid": 0, 00:15:38.925 "state": "enabled", 00:15:38.925 "thread": "nvmf_tgt_poll_group_000", 00:15:38.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:38.925 "listen_address": { 00:15:38.925 "trtype": "RDMA", 00:15:38.925 "adrfam": "IPv4", 00:15:38.925 "traddr": "192.168.100.8", 00:15:38.925 "trsvcid": "4420" 00:15:38.925 }, 00:15:38.925 "peer_address": { 00:15:38.925 "trtype": "RDMA", 00:15:38.925 "adrfam": "IPv4", 00:15:38.925 "traddr": "192.168.100.8", 00:15:38.925 "trsvcid": "54384" 00:15:38.925 }, 00:15:38.925 "auth": { 00:15:38.925 "state": "completed", 00:15:38.925 "digest": "sha384", 00:15:38.925 "dhgroup": "ffdhe6144" 00:15:38.925 } 00:15:38.925 } 00:15:38.925 ]' 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:38.925 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.186 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.186 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.186 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.186 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:39.186 18:00:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.131 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.392 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.393 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.963 00:15:40.963 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.963 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.963 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.224 { 00:15:41.224 "cntlid": 89, 00:15:41.224 "qid": 0, 00:15:41.224 "state": "enabled", 00:15:41.224 "thread": "nvmf_tgt_poll_group_000", 00:15:41.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:41.224 "listen_address": { 00:15:41.224 "trtype": "RDMA", 00:15:41.224 "adrfam": "IPv4", 00:15:41.224 "traddr": "192.168.100.8", 00:15:41.224 "trsvcid": "4420" 00:15:41.224 }, 00:15:41.224 "peer_address": { 00:15:41.224 "trtype": "RDMA", 00:15:41.224 "adrfam": "IPv4", 00:15:41.224 "traddr": "192.168.100.8", 00:15:41.224 "trsvcid": "48925" 00:15:41.224 }, 00:15:41.224 "auth": { 00:15:41.224 "state": "completed", 00:15:41.224 "digest": "sha384", 00:15:41.224 "dhgroup": "ffdhe8192" 00:15:41.224 } 00:15:41.224 } 00:15:41.224 ]' 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.224 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.485 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:41.485 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:42.430 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.430 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:42.430 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.430 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.430 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.430 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.430 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.430 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
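(Note for anyone replaying this log by hand: each `connect_authenticate <digest> <dhgroup> <keyid>` pass recorded above reduces to the same five host/target steps. Below is a minimal sketch of one pass, reconstructed only from commands visible in this log. The target-side RPC socket is an assumption — auth.sh drives the target through `rpc_cmd`, whose socket does not appear in this slice — and `key1`/`ckey1` stand for keyring names registered earlier in the test.)

# Sketch of one connect_authenticate pass (sha384 / ffdhe8192 / keyid 1).
# Assumption: target app on the default RPC socket; host app on
# /var/tmp/host.sock, as shown throughout the log.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# 1) Pin the host's bdev_nvme layer to the digest/dhgroup under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# 2) Admit the host on the subsystem with its DH-HMAC-CHAP key pair.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3) Attach over RDMA; authentication runs as part of the fabric CONNECT.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4) Confirm what the target negotiated, as the log's jq checks do.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect: completed

# 5) Detach so the next digest/dhgroup/keyid combination starts clean.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0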
00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.692 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.268 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.268 { 00:15:43.268 "cntlid": 91, 00:15:43.268 "qid": 0, 00:15:43.268 "state": "enabled", 00:15:43.268 "thread": "nvmf_tgt_poll_group_000", 00:15:43.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:43.268 "listen_address": { 00:15:43.268 "trtype": "RDMA", 00:15:43.268 "adrfam": "IPv4", 00:15:43.268 "traddr": "192.168.100.8", 00:15:43.268 "trsvcid": "4420" 00:15:43.268 }, 00:15:43.268 "peer_address": { 00:15:43.268 "trtype": "RDMA", 00:15:43.268 "adrfam": "IPv4", 00:15:43.268 "traddr": "192.168.100.8", 00:15:43.268 "trsvcid": "60437" 00:15:43.268 }, 00:15:43.268 "auth": { 
00:15:43.268 "state": "completed", 00:15:43.268 "digest": "sha384", 00:15:43.268 "dhgroup": "ffdhe8192" 00:15:43.268 } 00:15:43.268 } 00:15:43.268 ]' 00:15:43.268 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.529 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.529 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.529 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.529 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.529 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.529 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.530 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.791 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:43.791 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:44.736 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.736 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:44.736 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.736 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.736 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.736 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.736 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.736 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.996 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.569 00:15:45.569 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.569 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.569 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.569 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.569 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.569 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.569 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.569 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.569 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.569 { 00:15:45.569 "cntlid": 93, 00:15:45.569 "qid": 0, 00:15:45.569 "state": "enabled", 00:15:45.569 "thread": "nvmf_tgt_poll_group_000", 00:15:45.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:45.569 "listen_address": { 00:15:45.569 "trtype": "RDMA", 00:15:45.569 "adrfam": "IPv4", 00:15:45.569 "traddr": "192.168.100.8", 
00:15:45.569 "trsvcid": "4420" 00:15:45.569 }, 00:15:45.569 "peer_address": { 00:15:45.569 "trtype": "RDMA", 00:15:45.569 "adrfam": "IPv4", 00:15:45.569 "traddr": "192.168.100.8", 00:15:45.569 "trsvcid": "35350" 00:15:45.569 }, 00:15:45.569 "auth": { 00:15:45.569 "state": "completed", 00:15:45.569 "digest": "sha384", 00:15:45.569 "dhgroup": "ffdhe8192" 00:15:45.569 } 00:15:45.569 } 00:15:45.569 ]' 00:15:45.569 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.569 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.829 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.829 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.829 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.829 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.829 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.829 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.090 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:15:46.090 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:15:47.034 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.034 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:47.034 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.034 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.034 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.034 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.034 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:47.034 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.295 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.869 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.869 { 00:15:47.869 "cntlid": 95, 00:15:47.869 "qid": 0, 00:15:47.869 "state": "enabled", 00:15:47.869 "thread": "nvmf_tgt_poll_group_000", 00:15:47.869 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:47.869 "listen_address": { 00:15:47.869 "trtype": "RDMA", 00:15:47.869 "adrfam": "IPv4", 00:15:47.869 "traddr": "192.168.100.8", 00:15:47.869 "trsvcid": "4420" 00:15:47.869 }, 00:15:47.869 "peer_address": { 00:15:47.869 "trtype": "RDMA", 00:15:47.869 "adrfam": "IPv4", 00:15:47.869 "traddr": "192.168.100.8", 00:15:47.869 "trsvcid": "35781" 00:15:47.869 }, 00:15:47.869 "auth": { 00:15:47.869 "state": "completed", 00:15:47.869 "digest": "sha384", 00:15:47.869 "dhgroup": "ffdhe8192" 00:15:47.869 } 00:15:47.869 } 00:15:47.869 ]' 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.869 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.130 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.131 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.131 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.131 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.131 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.131 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:48.131 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:49.074 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:49.342 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.343 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.343 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.343 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.343 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.614 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.614 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.614 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.614 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.614 00:15:49.614 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.614 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.614 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.875 18:01:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.875 { 00:15:49.875 "cntlid": 97, 00:15:49.875 "qid": 0, 00:15:49.875 "state": "enabled", 00:15:49.875 "thread": "nvmf_tgt_poll_group_000", 00:15:49.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:49.875 "listen_address": { 00:15:49.875 "trtype": "RDMA", 00:15:49.875 "adrfam": "IPv4", 00:15:49.875 "traddr": "192.168.100.8", 00:15:49.875 "trsvcid": "4420" 00:15:49.875 }, 00:15:49.875 "peer_address": { 00:15:49.875 "trtype": "RDMA", 00:15:49.875 "adrfam": "IPv4", 00:15:49.875 "traddr": "192.168.100.8", 00:15:49.875 "trsvcid": "51009" 00:15:49.875 }, 00:15:49.875 "auth": { 00:15:49.875 "state": "completed", 00:15:49.875 "digest": "sha512", 00:15:49.875 "dhgroup": "null" 00:15:49.875 } 00:15:49.875 } 00:15:49.875 ]' 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.875 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.136 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.136 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.136 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.136 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:50.136 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:51.080 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
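(Second leg of each pass, likewise reconstructed from the commands above: after the SPDK host detaches, the same subsystem is exercised from the kernel initiator, which takes the secrets inline in DHHC-1 form rather than by keyring name. The DHHC-1 strings below are placeholders for illustration; the log prints the actual test values verbatim. The target RPC socket is assumed default, as in the sketch above.)

# Kernel-initiator leg and teardown (same NQNs/address as the sketch above).
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
SECRET='DHHC-1:00:<host key, base64>:'              # placeholder
CTRL_SECRET='DHHC-1:03:<controller key, base64>:'   # placeholder

# Authenticated fabric connect from the kernel host stack.
nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

# Drop the session and revoke the host entry so the next keyid starts from
# an empty subsystem ACL, matching the disconnect/remove_host lines above.
nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"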
00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.341 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.603 00:15:51.603 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.603 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.603 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.864 { 00:15:51.864 "cntlid": 99, 00:15:51.864 "qid": 0, 00:15:51.864 "state": "enabled", 00:15:51.864 "thread": "nvmf_tgt_poll_group_000", 00:15:51.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:51.864 "listen_address": { 00:15:51.864 "trtype": "RDMA", 00:15:51.864 "adrfam": "IPv4", 00:15:51.864 "traddr": "192.168.100.8", 00:15:51.864 "trsvcid": "4420" 00:15:51.864 }, 00:15:51.864 "peer_address": { 00:15:51.864 "trtype": "RDMA", 00:15:51.864 "adrfam": "IPv4", 00:15:51.864 "traddr": "192.168.100.8", 00:15:51.864 "trsvcid": "59763" 00:15:51.864 }, 00:15:51.864 "auth": { 00:15:51.864 "state": "completed", 00:15:51.864 "digest": "sha512", 00:15:51.864 "dhgroup": "null" 00:15:51.864 } 00:15:51.864 } 00:15:51.864 ]' 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.864 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.124 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.124 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.124 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.124 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:52.124 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:15:53.126 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:53.462 
18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.462 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.463 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.463 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.463 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.463 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.463 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.463 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.463 18:01:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.740 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.740 
18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.740 { 00:15:53.740 "cntlid": 101, 00:15:53.740 "qid": 0, 00:15:53.740 "state": "enabled", 00:15:53.740 "thread": "nvmf_tgt_poll_group_000", 00:15:53.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:53.740 "listen_address": { 00:15:53.740 "trtype": "RDMA", 00:15:53.740 "adrfam": "IPv4", 00:15:53.740 "traddr": "192.168.100.8", 00:15:53.740 "trsvcid": "4420" 00:15:53.740 }, 00:15:53.740 "peer_address": { 00:15:53.740 "trtype": "RDMA", 00:15:53.740 "adrfam": "IPv4", 00:15:53.740 "traddr": "192.168.100.8", 00:15:53.740 "trsvcid": "33407" 00:15:53.740 }, 00:15:53.740 "auth": { 00:15:53.740 "state": "completed", 00:15:53.740 "digest": "sha512", 00:15:53.740 "dhgroup": "null" 00:15:53.740 } 00:15:53.740 } 00:15:53.740 ]' 00:15:53.740 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.002 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.002 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.002 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:54.002 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.002 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.002 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.002 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.262 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:15:54.262 18:01:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:15:55.206 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.206 18:01:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:55.206 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.206 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.206 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.206 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.206 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.206 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.467 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.467 18:01:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.727 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.727 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.727 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.727 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.727 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.728 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.728 { 00:15:55.728 "cntlid": 103, 00:15:55.728 "qid": 0, 00:15:55.728 "state": "enabled", 00:15:55.728 "thread": "nvmf_tgt_poll_group_000", 00:15:55.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:55.728 "listen_address": { 00:15:55.728 "trtype": "RDMA", 00:15:55.728 "adrfam": "IPv4", 00:15:55.728 "traddr": "192.168.100.8", 00:15:55.728 "trsvcid": "4420" 00:15:55.728 }, 00:15:55.728 "peer_address": { 00:15:55.728 "trtype": "RDMA", 00:15:55.728 "adrfam": "IPv4", 00:15:55.728 "traddr": "192.168.100.8", 00:15:55.728 "trsvcid": "50040" 00:15:55.728 }, 00:15:55.728 "auth": { 00:15:55.728 "state": "completed", 00:15:55.728 "digest": "sha512", 00:15:55.728 "dhgroup": "null" 00:15:55.728 } 00:15:55.728 } 00:15:55.728 ]' 00:15:55.728 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.728 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.728 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.728 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:55.728 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.988 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.989 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.989 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.989 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:55.989 18:01:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:15:56.930 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.192 18:01:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.192 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.452 00:15:57.452 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:15:57.453 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.453 18:01:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.714 { 00:15:57.714 "cntlid": 105, 00:15:57.714 "qid": 0, 00:15:57.714 "state": "enabled", 00:15:57.714 "thread": "nvmf_tgt_poll_group_000", 00:15:57.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:57.714 "listen_address": { 00:15:57.714 "trtype": "RDMA", 00:15:57.714 "adrfam": "IPv4", 00:15:57.714 "traddr": "192.168.100.8", 00:15:57.714 "trsvcid": "4420" 00:15:57.714 }, 00:15:57.714 "peer_address": { 00:15:57.714 "trtype": "RDMA", 00:15:57.714 "adrfam": "IPv4", 00:15:57.714 "traddr": "192.168.100.8", 00:15:57.714 "trsvcid": "39504" 00:15:57.714 }, 00:15:57.714 "auth": { 00:15:57.714 "state": "completed", 00:15:57.714 "digest": "sha512", 00:15:57.714 "dhgroup": "ffdhe2048" 00:15:57.714 } 00:15:57.714 } 00:15:57.714 ]' 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.714 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.975 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:57.975 18:01:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 
--dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:15:58.918 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.918 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:58.918 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.918 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.918 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.918 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.918 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.918 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.178 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.439 00:15:59.439 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.439 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.439 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.699 { 00:15:59.699 "cntlid": 107, 00:15:59.699 "qid": 0, 00:15:59.699 "state": "enabled", 00:15:59.699 "thread": "nvmf_tgt_poll_group_000", 00:15:59.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:15:59.699 "listen_address": { 00:15:59.699 "trtype": "RDMA", 00:15:59.699 "adrfam": "IPv4", 00:15:59.699 "traddr": "192.168.100.8", 00:15:59.699 "trsvcid": "4420" 00:15:59.699 }, 00:15:59.699 "peer_address": { 00:15:59.699 "trtype": "RDMA", 00:15:59.699 "adrfam": "IPv4", 00:15:59.699 "traddr": "192.168.100.8", 00:15:59.699 "trsvcid": "51995" 00:15:59.699 }, 00:15:59.699 "auth": { 00:15:59.699 "state": "completed", 00:15:59.699 "digest": "sha512", 00:15:59.699 "dhgroup": "ffdhe2048" 00:15:59.699 } 00:15:59.699 } 00:15:59.699 ]' 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.699 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.959 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 
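
The trace above is one iteration of the test's two nested loops: for every DH group, each of the four key slots is exercised with the same digest. Condensed into a sketch, using the helper names that appear in the trace itself (hostrpc is the host-side rpc.py wrapper bound to /var/tmp/host.sock; connect_authenticate performs the add-host/attach/verify/teardown sequence) — a reconstruction of the visible control flow, not the literal auth.sh source:

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Pin the host to one digest and one DH group so the qpair
            # that comes back can only have negotiated those values.
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
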
00:15:59.959 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:16:00.902 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.902 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:00.902 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.902 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.902 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.902 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.902 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.902 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.161 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.421 00:16:01.421 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.421 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.421 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.681 { 00:16:01.681 "cntlid": 109, 00:16:01.681 "qid": 0, 00:16:01.681 "state": "enabled", 00:16:01.681 "thread": "nvmf_tgt_poll_group_000", 00:16:01.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:01.681 "listen_address": { 00:16:01.681 "trtype": "RDMA", 00:16:01.681 "adrfam": "IPv4", 00:16:01.681 "traddr": "192.168.100.8", 00:16:01.681 "trsvcid": "4420" 00:16:01.681 }, 00:16:01.681 "peer_address": { 00:16:01.681 "trtype": "RDMA", 00:16:01.681 "adrfam": "IPv4", 00:16:01.681 "traddr": "192.168.100.8", 00:16:01.681 "trsvcid": "59568" 00:16:01.681 }, 00:16:01.681 "auth": { 00:16:01.681 "state": "completed", 00:16:01.681 "digest": "sha512", 00:16:01.681 "dhgroup": "ffdhe2048" 00:16:01.681 } 00:16:01.681 } 00:16:01.681 ]' 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.681 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.943 18:01:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:16:01.943 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:16:02.886 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.886 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:02.886 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.886 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.886 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.886 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.886 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.886 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.147 18:01:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.147 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.409 00:16:03.409 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.409 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.409 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.671 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.671 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.671 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.671 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.671 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.671 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.671 { 00:16:03.671 "cntlid": 111, 00:16:03.671 "qid": 0, 00:16:03.671 "state": "enabled", 00:16:03.671 "thread": "nvmf_tgt_poll_group_000", 00:16:03.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:03.671 "listen_address": { 00:16:03.671 "trtype": "RDMA", 00:16:03.671 "adrfam": "IPv4", 00:16:03.671 "traddr": "192.168.100.8", 00:16:03.671 "trsvcid": "4420" 00:16:03.671 }, 00:16:03.671 "peer_address": { 00:16:03.671 "trtype": "RDMA", 00:16:03.671 "adrfam": "IPv4", 00:16:03.671 "traddr": "192.168.100.8", 00:16:03.671 "trsvcid": "44353" 00:16:03.671 }, 00:16:03.671 "auth": { 00:16:03.671 "state": "completed", 00:16:03.671 "digest": "sha512", 00:16:03.671 "dhgroup": "ffdhe2048" 00:16:03.671 } 00:16:03.671 } 00:16:03.671 ]' 00:16:03.671 18:01:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.671 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.671 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.671 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.671 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.671 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.671 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.671 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.933 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:03.933 18:01:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.877 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.138 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:05.138 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.138 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.138 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.139 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.400 00:16:05.400 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.400 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.400 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.400 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.400 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.400 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.400 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.661 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.661 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.661 { 00:16:05.661 "cntlid": 113, 00:16:05.661 "qid": 0, 00:16:05.661 "state": "enabled", 00:16:05.661 "thread": "nvmf_tgt_poll_group_000", 00:16:05.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:05.661 "listen_address": { 00:16:05.661 "trtype": "RDMA", 00:16:05.661 "adrfam": "IPv4", 00:16:05.661 "traddr": "192.168.100.8", 00:16:05.661 "trsvcid": "4420" 00:16:05.661 }, 00:16:05.661 "peer_address": { 00:16:05.661 "trtype": "RDMA", 00:16:05.661 "adrfam": "IPv4", 00:16:05.661 "traddr": "192.168.100.8", 00:16:05.661 "trsvcid": "42665" 00:16:05.661 }, 00:16:05.661 "auth": { 00:16:05.661 "state": "completed", 00:16:05.661 "digest": "sha512", 00:16:05.661 "dhgroup": "ffdhe3072" 00:16:05.661 } 00:16:05.661 } 00:16:05.661 ]' 00:16:05.661 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.661 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.661 18:01:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.661 18:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.661 18:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.661 18:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.661 18:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.661 18:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.923 18:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:16:05.923 18:01:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:16:06.867 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.867 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:06.867 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.867 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.867 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.867 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.867 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.867 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
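
Each cycle binds the same named key material on both sides of the connection: the target associates the keys with the host NQN, and the host names them again when attaching the controller. Both calls appear verbatim in the trace; only the long host NQN is abbreviated to $hostnqn here, and rpc_cmd is the trace's target-side RPC wrapper:

    # Target side: authorize the host and bind its DH-HMAC-CHAP keys.
    # The --dhchap-ctrlr-key argument is what makes the authentication
    # bidirectional; the key3 cycles omit it because no controller key
    # is defined for that slot (the ckeys[3] expansion is empty).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach a controller with the matching key names.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
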
00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.129 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.391 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.391 { 00:16:07.391 "cntlid": 115, 00:16:07.391 "qid": 0, 00:16:07.391 "state": "enabled", 00:16:07.391 "thread": "nvmf_tgt_poll_group_000", 00:16:07.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:07.391 "listen_address": { 00:16:07.391 "trtype": "RDMA", 00:16:07.391 "adrfam": "IPv4", 00:16:07.391 "traddr": "192.168.100.8", 00:16:07.391 "trsvcid": "4420" 00:16:07.391 }, 00:16:07.391 "peer_address": { 00:16:07.391 "trtype": "RDMA", 00:16:07.391 "adrfam": "IPv4", 00:16:07.391 "traddr": "192.168.100.8", 00:16:07.391 "trsvcid": "50498" 00:16:07.391 }, 00:16:07.391 "auth": { 00:16:07.391 "state": "completed", 00:16:07.391 "digest": "sha512", 00:16:07.391 "dhgroup": "ffdhe3072" 00:16:07.391 } 00:16:07.391 } 00:16:07.391 ]' 00:16:07.391 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.652 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.652 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
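
The jq probes that follow each attach are the actual assertions of the test: the qpair reported by nvmf_subsystem_get_qpairs must show exactly the digest and DH group that bdev_nvme_set_options forced, with the authentication state completed. A condensed sketch of those checks — the jq filters are the literal ones from the trace, while the herestring form stands in for the trace's intermediate qpairs variable handling:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # All three must hold, or the negotiated auth parameters are wrong.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
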
00:16:07.652 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.652 18:01:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.652 18:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.652 18:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.652 18:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.916 18:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:16:07.916 18:01:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:16:08.859 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.859 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.859 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.859 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.859 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.859 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.859 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.859 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.120 
18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.120 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.120 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.382 { 00:16:09.382 "cntlid": 117, 00:16:09.382 "qid": 0, 00:16:09.382 "state": "enabled", 00:16:09.382 "thread": "nvmf_tgt_poll_group_000", 00:16:09.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:09.382 "listen_address": { 00:16:09.382 "trtype": "RDMA", 00:16:09.382 "adrfam": "IPv4", 00:16:09.382 "traddr": "192.168.100.8", 00:16:09.382 "trsvcid": "4420" 00:16:09.382 }, 00:16:09.382 "peer_address": { 00:16:09.382 "trtype": "RDMA", 00:16:09.382 "adrfam": "IPv4", 00:16:09.382 "traddr": "192.168.100.8", 00:16:09.382 "trsvcid": "42654" 00:16:09.382 }, 00:16:09.382 "auth": { 00:16:09.382 "state": "completed", 00:16:09.382 "digest": "sha512", 00:16:09.382 "dhgroup": "ffdhe3072" 00:16:09.382 } 00:16:09.382 } 00:16:09.382 ]' 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.382 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.644 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.644 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.644 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.644 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.644 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.644 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:16:09.905 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
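
The nvme-cli connects interleaved with the RPC checks exercise the same credentials in-band, passed as opaque secret strings rather than named key objects. The DHHC-1:<nn>: prefix visible on every secret is, per the NVMe in-band authentication key format, an identifier of the hash the key was generated for — an inference from the spec, not something the log itself states. Shape of the call, with $hostnqn/$hostid standing in for the literal UUID values and the secrets elided ('...') because their full base64 forms already appear in the trace:

    # In-band DH-HMAC-CHAP: same keys as the RPC path, serialized form.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
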
00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.851 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.117 00:16:11.117 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.117 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.117 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.379 { 00:16:11.379 "cntlid": 119, 00:16:11.379 "qid": 0, 00:16:11.379 "state": "enabled", 00:16:11.379 "thread": "nvmf_tgt_poll_group_000", 00:16:11.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:11.379 "listen_address": { 00:16:11.379 "trtype": "RDMA", 00:16:11.379 "adrfam": "IPv4", 00:16:11.379 "traddr": "192.168.100.8", 00:16:11.379 "trsvcid": "4420" 00:16:11.379 }, 00:16:11.379 "peer_address": { 00:16:11.379 "trtype": "RDMA", 00:16:11.379 "adrfam": "IPv4", 00:16:11.379 "traddr": "192.168.100.8", 00:16:11.379 "trsvcid": "57166" 00:16:11.379 }, 00:16:11.379 "auth": { 00:16:11.379 "state": "completed", 00:16:11.379 "digest": "sha512", 00:16:11.379 "dhgroup": "ffdhe3072" 
00:16:11.379 } 00:16:11.379 } 00:16:11.379 ]' 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.379 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.641 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.641 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.641 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.641 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.641 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.641 18:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:11.641 18:01:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:12.585 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.845 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.846 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.846 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.846 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.846 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.107 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.368 { 00:16:13.368 "cntlid": 121, 00:16:13.368 "qid": 0, 00:16:13.368 "state": "enabled", 00:16:13.368 "thread": "nvmf_tgt_poll_group_000", 00:16:13.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:13.368 "listen_address": { 00:16:13.368 "trtype": "RDMA", 00:16:13.368 "adrfam": "IPv4", 00:16:13.368 "traddr": "192.168.100.8", 00:16:13.368 "trsvcid": "4420" 00:16:13.368 }, 00:16:13.368 "peer_address": { 00:16:13.368 "trtype": "RDMA", 
00:16:13.368 "adrfam": "IPv4", 00:16:13.368 "traddr": "192.168.100.8", 00:16:13.368 "trsvcid": "36819" 00:16:13.368 }, 00:16:13.368 "auth": { 00:16:13.368 "state": "completed", 00:16:13.368 "digest": "sha512", 00:16:13.368 "dhgroup": "ffdhe4096" 00:16:13.368 } 00:16:13.368 } 00:16:13.368 ]' 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.368 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.630 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.630 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.630 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.630 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.630 18:01:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.630 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:16:13.630 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:16:14.574 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.835 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.095 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.358 { 00:16:15.358 "cntlid": 123, 00:16:15.358 "qid": 0, 00:16:15.358 "state": "enabled", 00:16:15.358 "thread": "nvmf_tgt_poll_group_000", 
00:16:15.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:15.358 "listen_address": { 00:16:15.358 "trtype": "RDMA", 00:16:15.358 "adrfam": "IPv4", 00:16:15.358 "traddr": "192.168.100.8", 00:16:15.358 "trsvcid": "4420" 00:16:15.358 }, 00:16:15.358 "peer_address": { 00:16:15.358 "trtype": "RDMA", 00:16:15.358 "adrfam": "IPv4", 00:16:15.358 "traddr": "192.168.100.8", 00:16:15.358 "trsvcid": "36528" 00:16:15.358 }, 00:16:15.358 "auth": { 00:16:15.358 "state": "completed", 00:16:15.358 "digest": "sha512", 00:16:15.358 "dhgroup": "ffdhe4096" 00:16:15.358 } 00:16:15.358 } 00:16:15.358 ]' 00:16:15.358 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.618 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.619 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.619 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.619 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.619 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.619 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.619 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.880 18:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:16:15.880 18:01:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:16:16.820 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.820 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:16.820 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.820 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.820 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.820 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.820 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:16:16.820 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.081 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.341 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
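[editor's note] The qpairs dump that follows is what the digest/dhgroup/state assertions consume, one jq path at a time. An equivalent one-pass formulation, shown here as a sketch rather than the suite's code; for this iteration the expected group is ffdhe4096:

    # Extract the negotiated auth parameters from nvmf_subsystem_get_qpairs in
    # one jq pass, then assert them; mirrors the three separate checks below.
    read -r digest dhgroup state < <(
        rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
            jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
    )
    [[ $digest == sha512 && $dhgroup == ffdhe4096 && $state == completed ]]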
00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.341 { 00:16:17.341 "cntlid": 125, 00:16:17.341 "qid": 0, 00:16:17.341 "state": "enabled", 00:16:17.341 "thread": "nvmf_tgt_poll_group_000", 00:16:17.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:17.341 "listen_address": { 00:16:17.341 "trtype": "RDMA", 00:16:17.341 "adrfam": "IPv4", 00:16:17.341 "traddr": "192.168.100.8", 00:16:17.341 "trsvcid": "4420" 00:16:17.341 }, 00:16:17.341 "peer_address": { 00:16:17.341 "trtype": "RDMA", 00:16:17.341 "adrfam": "IPv4", 00:16:17.341 "traddr": "192.168.100.8", 00:16:17.341 "trsvcid": "39163" 00:16:17.341 }, 00:16:17.341 "auth": { 00:16:17.341 "state": "completed", 00:16:17.341 "digest": "sha512", 00:16:17.341 "dhgroup": "ffdhe4096" 00:16:17.341 } 00:16:17.341 } 00:16:17.341 ]' 00:16:17.341 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.602 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.602 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.602 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.602 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.602 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.602 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.602 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.862 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:16:17.863 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:16:18.803 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.803 18:01:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.803 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:18.804 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.804 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.804 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:18.804 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.804 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.064 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.064 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.064 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.064 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.325 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.325 18:01:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.325 { 00:16:19.325 "cntlid": 127, 00:16:19.325 "qid": 0, 00:16:19.325 "state": "enabled", 00:16:19.325 "thread": "nvmf_tgt_poll_group_000", 00:16:19.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:19.325 "listen_address": { 00:16:19.325 "trtype": "RDMA", 00:16:19.325 "adrfam": "IPv4", 00:16:19.325 "traddr": "192.168.100.8", 00:16:19.325 "trsvcid": "4420" 00:16:19.325 }, 00:16:19.325 "peer_address": { 00:16:19.325 "trtype": "RDMA", 00:16:19.325 "adrfam": "IPv4", 00:16:19.325 "traddr": "192.168.100.8", 00:16:19.325 "trsvcid": "55650" 00:16:19.325 }, 00:16:19.325 "auth": { 00:16:19.325 "state": "completed", 00:16:19.325 "digest": "sha512", 00:16:19.325 "dhgroup": "ffdhe4096" 00:16:19.325 } 00:16:19.325 } 00:16:19.325 ]' 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.325 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.585 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.585 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.585 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.585 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.585 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.846 18:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:19.846 18:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:20.416 18:01:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.676 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:20.676 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.676 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.676 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.676 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.676 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.676 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:20.676 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.937 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.200 00:16:21.200 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.200 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.200 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.460 18:01:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.460 { 00:16:21.460 "cntlid": 129, 00:16:21.460 "qid": 0, 00:16:21.460 "state": "enabled", 00:16:21.460 "thread": "nvmf_tgt_poll_group_000", 00:16:21.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:21.460 "listen_address": { 00:16:21.460 "trtype": "RDMA", 00:16:21.460 "adrfam": "IPv4", 00:16:21.460 "traddr": "192.168.100.8", 00:16:21.460 "trsvcid": "4420" 00:16:21.460 }, 00:16:21.460 "peer_address": { 00:16:21.460 "trtype": "RDMA", 00:16:21.460 "adrfam": "IPv4", 00:16:21.460 "traddr": "192.168.100.8", 00:16:21.460 "trsvcid": "42277" 00:16:21.460 }, 00:16:21.460 "auth": { 00:16:21.460 "state": "completed", 00:16:21.460 "digest": "sha512", 00:16:21.460 "dhgroup": "ffdhe6144" 00:16:21.460 } 00:16:21.460 } 00:16:21.460 ]' 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.460 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.721 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.721 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.721 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.721 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:16:21.722 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:16:22.663 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.663 18:01:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.663 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.663 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.663 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.663 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.663 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.663 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.924 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.184 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.445 { 00:16:23.445 "cntlid": 131, 00:16:23.445 "qid": 0, 00:16:23.445 "state": "enabled", 00:16:23.445 "thread": "nvmf_tgt_poll_group_000", 00:16:23.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:23.445 "listen_address": { 00:16:23.445 "trtype": "RDMA", 00:16:23.445 "adrfam": "IPv4", 00:16:23.445 "traddr": "192.168.100.8", 00:16:23.445 "trsvcid": "4420" 00:16:23.445 }, 00:16:23.445 "peer_address": { 00:16:23.445 "trtype": "RDMA", 00:16:23.445 "adrfam": "IPv4", 00:16:23.445 "traddr": "192.168.100.8", 00:16:23.445 "trsvcid": "35273" 00:16:23.445 }, 00:16:23.445 "auth": { 00:16:23.445 "state": "completed", 00:16:23.445 "digest": "sha512", 00:16:23.445 "dhgroup": "ffdhe6144" 00:16:23.445 } 00:16:23.445 } 00:16:23.445 ]' 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.445 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.707 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.707 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.707 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.707 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.707 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.968 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:16:23.968 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret 
DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==: 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.912 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.174 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.174 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.174 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.174 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.435 00:16:25.435 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.435 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.435 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.696 { 00:16:25.696 "cntlid": 133, 00:16:25.696 "qid": 0, 00:16:25.696 "state": "enabled", 00:16:25.696 "thread": "nvmf_tgt_poll_group_000", 00:16:25.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:25.696 "listen_address": { 00:16:25.696 "trtype": "RDMA", 00:16:25.696 "adrfam": "IPv4", 00:16:25.696 "traddr": "192.168.100.8", 00:16:25.696 "trsvcid": "4420" 00:16:25.696 }, 00:16:25.696 "peer_address": { 00:16:25.696 "trtype": "RDMA", 00:16:25.696 "adrfam": "IPv4", 00:16:25.696 "traddr": "192.168.100.8", 00:16:25.696 "trsvcid": "52479" 00:16:25.696 }, 00:16:25.696 "auth": { 00:16:25.696 "state": "completed", 00:16:25.696 "digest": "sha512", 00:16:25.696 "dhgroup": "ffdhe6144" 00:16:25.696 } 00:16:25.696 } 00:16:25.696 ]' 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.696 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.957 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:16:25.957 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch: 00:16:26.900 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.900 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.900 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.900 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.900 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.900 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.900 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.900 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.161 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.422 00:16:27.423 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.423 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.423 18:01:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.684 { 00:16:27.684 "cntlid": 135, 00:16:27.684 "qid": 0, 00:16:27.684 "state": "enabled", 00:16:27.684 "thread": "nvmf_tgt_poll_group_000", 00:16:27.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:27.684 "listen_address": { 00:16:27.684 "trtype": "RDMA", 00:16:27.684 "adrfam": "IPv4", 00:16:27.684 "traddr": "192.168.100.8", 00:16:27.684 "trsvcid": "4420" 00:16:27.684 }, 00:16:27.684 "peer_address": { 00:16:27.684 "trtype": "RDMA", 00:16:27.684 "adrfam": "IPv4", 00:16:27.684 "traddr": "192.168.100.8", 00:16:27.684 "trsvcid": "42907" 00:16:27.684 }, 00:16:27.684 "auth": { 00:16:27.684 "state": "completed", 00:16:27.684 "digest": "sha512", 00:16:27.684 "dhgroup": "ffdhe6144" 00:16:27.684 } 00:16:27.684 } 00:16:27.684 ]' 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.684 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.946 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.946 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.946 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.946 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 
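[editor's note] After the SPDK-side checks, the nvme_connect echo above hands the same DHHC-1 secrets to kernel nvme-cli, so every key is also proven against the in-kernel initiator; this particular pass supplies only --dhchap-secret because key3 has no controller counterpart. Condensed from the nvme connect / nvme disconnect / nvmf_subsystem_remove_host lines in the trace, with illustrative array names:

    # Kernel-initiator leg of a pass: connect through nvme-cli with the same
    # DHHC-1 secrets, tear the association down, then deregister the host so
    # the next key can be installed cleanly.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "${keys[keyid]}" \
        ${ckeys[keyid]:+--dhchap-ctrl-secret "${ckeys[keyid]}"}
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"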
00:16:27.946 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:16:28.888 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.888 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:28.888 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.888 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.149 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.722
00:16:29.723 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:29.723 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:29.723 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:29.984 {
00:16:29.984 "cntlid": 137,
00:16:29.984 "qid": 0,
00:16:29.984 "state": "enabled",
00:16:29.984 "thread": "nvmf_tgt_poll_group_000",
00:16:29.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:16:29.984 "listen_address": {
00:16:29.984 "trtype": "RDMA",
00:16:29.984 "adrfam": "IPv4",
00:16:29.984 "traddr": "192.168.100.8",
00:16:29.984 "trsvcid": "4420"
00:16:29.984 },
00:16:29.984 "peer_address": {
00:16:29.984 "trtype": "RDMA",
00:16:29.984 "adrfam": "IPv4",
00:16:29.984 "traddr": "192.168.100.8",
00:16:29.984 "trsvcid": "41620"
00:16:29.984 },
00:16:29.984 "auth": {
00:16:29.984 "state": "completed",
00:16:29.984 "digest": "sha512",
00:16:29.984 "dhgroup": "ffdhe8192"
00:16:29.984 }
00:16:29.984 }
00:16:29.984 ]'
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.984 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:30.245 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:16:30.245 18:01:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=:
00:16:31.208 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:31.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:31.208 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:31.208 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.208 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.209 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.209 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:31.209 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:31.209 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.470 18:01:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.258
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:32.258 {
00:16:32.258 "cntlid": 139,
00:16:32.258 "qid": 0,
00:16:32.258 "state": "enabled",
00:16:32.258 "thread": "nvmf_tgt_poll_group_000",
00:16:32.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:16:32.258 "listen_address": {
00:16:32.258 "trtype": "RDMA",
00:16:32.258 "adrfam": "IPv4",
00:16:32.258 "traddr": "192.168.100.8",
00:16:32.258 "trsvcid": "4420"
00:16:32.258 },
00:16:32.258 "peer_address": {
00:16:32.258 "trtype": "RDMA",
00:16:32.258 "adrfam": "IPv4",
00:16:32.258 "traddr": "192.168.100.8",
00:16:32.258 "trsvcid": "51317"
00:16:32.258 },
00:16:32.258 "auth": {
00:16:32.258 "state": "completed",
00:16:32.258 "digest": "sha512",
00:16:32.258 "dhgroup": "ffdhe8192"
00:16:32.258 }
00:16:32.258 }
00:16:32.258 ]'
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.258 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.518 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:16:32.518 18:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: --dhchap-ctrl-secret DHHC-1:02:MzBjODg3N2ZhMTZlMjZhZjcwNTQ0NTNjMzYxMGU4YzFhMGVhZTdiZTJlODU3NTY2w7nGZw==:
00:16:33.459 18:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.459 18:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:33.459 18:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.459 18:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.459 18:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.459 18:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.459 18:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:33.459 18:01:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.721 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.292
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.292 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:34.292 {
00:16:34.292 "cntlid": 141,
00:16:34.292 "qid": 0,
00:16:34.292 "state": "enabled",
00:16:34.292 "thread": "nvmf_tgt_poll_group_000",
00:16:34.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:16:34.292 "listen_address": {
00:16:34.292 "trtype": "RDMA",
00:16:34.292 "adrfam": "IPv4",
00:16:34.292 "traddr": "192.168.100.8",
00:16:34.292 "trsvcid": "4420"
00:16:34.292 },
00:16:34.292 "peer_address": {
00:16:34.292 "trtype": "RDMA",
00:16:34.292 "adrfam": "IPv4",
00:16:34.292 "traddr": "192.168.100.8",
00:16:34.292 "trsvcid": "56626"
00:16:34.292 },
00:16:34.292 "auth": {
00:16:34.292 "state": "completed",
00:16:34.292 "digest": "sha512",
00:16:34.292 "dhgroup": "ffdhe8192"
00:16:34.292 }
00:16:34.292 }
00:16:34.292 ]'
00:16:34.553 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:34.553 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:34.553 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:34.553 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:34.553 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:34.553 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.553 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.553 18:01:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.814 18:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:16:34.814 18:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:01:Y2IyNDBiMDY0ZjIwNWRkMWE5ZmRkZGZiNDEwYTY3YmLSM+Ch:
00:16:35.758 18:01:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.758 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:35.758 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.758 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.758 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.758 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:35.758 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:35.758 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:36.019 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:36.592
00:16:36.592 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:36.592 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:36.592 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:36.592 {
00:16:36.592 "cntlid": 143,
00:16:36.592 "qid": 0,
00:16:36.592 "state": "enabled",
00:16:36.592 "thread": "nvmf_tgt_poll_group_000",
00:16:36.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:16:36.592 "listen_address": {
00:16:36.592 "trtype": "RDMA",
00:16:36.592 "adrfam": "IPv4",
00:16:36.592 "traddr": "192.168.100.8",
00:16:36.592 "trsvcid": "4420"
00:16:36.592 },
00:16:36.592 "peer_address": {
00:16:36.592 "trtype": "RDMA",
00:16:36.592 "adrfam": "IPv4",
00:16:36.592 "traddr": "192.168.100.8",
00:16:36.592 "trsvcid": "48427"
00:16:36.592 },
00:16:36.592 "auth": {
00:16:36.592 "state": "completed",
00:16:36.592 "digest": "sha512",
00:16:36.592 "dhgroup": "ffdhe8192"
00:16:36.592 }
00:16:36.592 }
00:16:36.592 ]'
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:36.592 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:36.853 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:36.853 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:36.853 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.853 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.853 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.853 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:16:36.853 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=:
00:16:37.797 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:38.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.057 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.318 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.318 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.318 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.318 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.888
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:38.888 {
00:16:38.888 "cntlid": 145,
00:16:38.888 "qid": 0,
00:16:38.888 "state": "enabled",
00:16:38.888 "thread": "nvmf_tgt_poll_group_000",
00:16:38.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:16:38.888 "listen_address": {
00:16:38.888 "trtype": "RDMA",
00:16:38.888 "adrfam": "IPv4",
00:16:38.888 "traddr": "192.168.100.8",
00:16:38.888 "trsvcid": "4420"
00:16:38.888 },
"peer_address": { 00:16:38.888 "trtype": "RDMA", 00:16:38.888 "adrfam": "IPv4", 00:16:38.888 "traddr": "192.168.100.8", 00:16:38.888 "trsvcid": "53504" 00:16:38.888 }, 00:16:38.888 "auth": { 00:16:38.888 "state": "completed", 00:16:38.888 "digest": "sha512", 00:16:38.888 "dhgroup": "ffdhe8192" 00:16:38.888 } 00:16:38.888 } 00:16:38.888 ]' 00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.888 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.150 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.150 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.150 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.150 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.150 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.150 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:16:39.150 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTU3MmFlNzEwOWRmODQxYWM2ODcwN2E3Y2YwMzQ1Mjk3M2FlMmU5MjZkZjkyNTQwdiHT5A==: --dhchap-ctrl-secret DHHC-1:03:ZGRjNjgwYjY4NzMzN2ZhMWNlYTBlZmQ3OGEzMjY2ZjQ4MDk0NmU5Y2E5MTVkZGU2ZWE2YjU2ZjRkNmJmZDFjOSbpG7k=: 00:16:40.090 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.351 18:01:58 
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:16:40.351 18:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:16:40.922 request:
00:16:40.922 {
00:16:40.922 "name": "nvme0",
00:16:40.922 "trtype": "rdma",
00:16:40.922 "traddr": "192.168.100.8",
00:16:40.922 "adrfam": "ipv4",
00:16:40.922 "trsvcid": "4420",
00:16:40.922 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:16:40.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:16:40.922 "prchk_reftag": false,
00:16:40.922 "prchk_guard": false,
00:16:40.922 "hdgst": false,
00:16:40.922 "ddgst": false,
00:16:40.922 "dhchap_key": "key2",
00:16:40.922 "allow_unrecognized_csi": false,
00:16:40.922 "method": "bdev_nvme_attach_controller",
00:16:40.922 "req_id": 1
00:16:40.922 }
00:16:40.922 Got JSON-RPC error response
00:16:40.922 response:
00:16:40.922 {
00:16:40.922 "code": -5,
00:16:40.922 "message": "Input/output error"
00:16:40.922 }
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:16:40.922 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:16:41.492 request:
00:16:41.492 {
00:16:41.492 "name": "nvme0",
00:16:41.492 "trtype": "rdma",
00:16:41.492 "traddr": "192.168.100.8",
00:16:41.492 "adrfam": "ipv4",
00:16:41.492 "trsvcid": "4420",
00:16:41.492 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:16:41.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:16:41.492 "prchk_reftag": false,
00:16:41.492 "prchk_guard": false,
00:16:41.492 "hdgst": false,
00:16:41.492 "ddgst": false,
00:16:41.492 "dhchap_key": "key1",
00:16:41.492 "dhchap_ctrlr_key": "ckey2",
00:16:41.492 "allow_unrecognized_csi": false,
00:16:41.492 "method": "bdev_nvme_attach_controller",
00:16:41.492 "req_id": 1
00:16:41.492 }
00:16:41.492 Got JSON-RPC error response
00:16:41.492 response:
00:16:41.492 {
00:16:41.492 "code": -5,
00:16:41.492 "message": "Input/output error"
00:16:41.492 }
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:16:41.492 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:41.493 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:16:41.493 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:41.493 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:16:41.493 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:41.493 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:41.493 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:41.493 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:42.065 request:
00:16:42.065 {
00:16:42.065 "name": "nvme0",
00:16:42.065 "trtype": "rdma", 00:16:42.065 "traddr": "192.168.100.8", 00:16:42.065 "adrfam": "ipv4", 00:16:42.065 "trsvcid": "4420", 00:16:42.065 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:42.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:42.065 "prchk_reftag": false, 00:16:42.065 "prchk_guard": false, 00:16:42.065 "hdgst": false, 00:16:42.065 "ddgst": false, 00:16:42.065 "dhchap_key": "key1", 00:16:42.065 "dhchap_ctrlr_key": "ckey1", 00:16:42.065 "allow_unrecognized_csi": false, 00:16:42.065 "method": "bdev_nvme_attach_controller", 00:16:42.065 "req_id": 1 00:16:42.065 } 00:16:42.065 Got JSON-RPC error response 00:16:42.065 response: 00:16:42.065 { 00:16:42.065 "code": -5, 00:16:42.065 "message": "Input/output error" 00:16:42.065 } 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3518700 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3518700 ']' 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3518700 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3518700 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3518700' 00:16:42.065 killing process with pid 3518700 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3518700 00:16:42.065 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3518700 00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:42.327 18:02:00 
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3550223
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3550223
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3550223 ']'
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:42.327 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3550223
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3550223 ']'
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:43.268 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:43.269 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:43.269 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.269 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:43.269 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:16:43.269 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:16:43.269 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.269 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.269 null0
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nO5
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.CTy ]]
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CTy
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NjA
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.3xd ]]
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3xd
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.QsH 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ua1 ]] 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ua1 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9kW 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.530 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.531 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.531 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.531 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.474 nvme0n1 00:16:44.474 18:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.474 18:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.474 18:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.735 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.736 { 00:16:44.736 "cntlid": 1, 00:16:44.736 "qid": 0, 00:16:44.736 "state": "enabled", 00:16:44.736 "thread": "nvmf_tgt_poll_group_000", 00:16:44.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:44.736 "listen_address": { 00:16:44.736 "trtype": "RDMA", 00:16:44.736 "adrfam": "IPv4", 00:16:44.736 "traddr": "192.168.100.8", 00:16:44.736 "trsvcid": "4420" 00:16:44.736 }, 00:16:44.736 "peer_address": { 00:16:44.736 "trtype": "RDMA", 00:16:44.736 "adrfam": "IPv4", 00:16:44.736 "traddr": "192.168.100.8", 00:16:44.736 "trsvcid": "36623" 00:16:44.736 }, 00:16:44.736 "auth": { 00:16:44.736 "state": "completed", 00:16:44.736 "digest": "sha512", 00:16:44.736 "dhgroup": "ffdhe8192" 00:16:44.736 } 00:16:44.736 } 00:16:44.736 ]' 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.736 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.996 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:44.996 18:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:45.937 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:46.197 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:46.197 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:46.197 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:46.198 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:46.198 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.198 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:46.198 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.198 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.198 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.198 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.458 request: 00:16:46.458 { 00:16:46.458 "name": "nvme0", 00:16:46.458 "trtype": "rdma", 00:16:46.458 "traddr": "192.168.100.8", 00:16:46.458 "adrfam": "ipv4", 00:16:46.458 "trsvcid": "4420", 00:16:46.458 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:46.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:46.458 "prchk_reftag": false, 00:16:46.458 "prchk_guard": false, 00:16:46.458 "hdgst": false, 00:16:46.458 "ddgst": false, 00:16:46.458 "dhchap_key": "key3", 00:16:46.458 "allow_unrecognized_csi": false, 00:16:46.458 "method": "bdev_nvme_attach_controller", 00:16:46.458 "req_id": 1 00:16:46.458 } 00:16:46.458 Got JSON-RPC error response 00:16:46.458 response: 00:16:46.458 { 00:16:46.458 "code": -5, 00:16:46.458 "message": "Input/output error" 00:16:46.458 } 00:16:46.458 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:46.458 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:46.458 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:46.458 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:46.458 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:46.458 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:46.458 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:46.458 18:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
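The request/response pair above is the expected failure: with the host-side bdev layer restricted to digests or DH groups that do not match what the target provisioned for key3, bdev_nvme_attach_controller returns JSON-RPC error -5 (Input/output error), and the NOT wrapper asserts the non-zero exit. Stripped to standalone commands, the check looks roughly like this (host NQN, addresses, and flags copied from the trace; the success/failure echo is illustrative only):

    # restrict the host to sha256 only; key3 is a sha512 secret, so the
    # DH-HMAC-CHAP negotiation is expected to fail
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 \
        && echo 'unexpected success' || echo 'failed as expected (-5)'

The later stanzas repeat this pattern after narrowing --dhchap-dhgroups and after changing which keys the subsystem accepts via nvmf_subsystem_set_keys; the bdev_nvme_set_keys attempts that return -13 (Permission denied) exercise re-keying with key pairs the target never associated with this host.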
00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.718 request: 00:16:46.718 { 00:16:46.718 "name": "nvme0", 00:16:46.718 "trtype": "rdma", 00:16:46.718 "traddr": "192.168.100.8", 00:16:46.718 "adrfam": "ipv4", 00:16:46.718 "trsvcid": "4420", 00:16:46.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:46.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:46.718 "prchk_reftag": false, 00:16:46.718 "prchk_guard": false, 00:16:46.718 "hdgst": false, 00:16:46.718 "ddgst": false, 00:16:46.718 "dhchap_key": "key3", 00:16:46.718 "allow_unrecognized_csi": false, 00:16:46.718 "method": "bdev_nvme_attach_controller", 00:16:46.718 "req_id": 1 00:16:46.718 } 00:16:46.718 Got JSON-RPC error response 00:16:46.718 response: 00:16:46.718 { 00:16:46.718 "code": -5, 00:16:46.718 "message": "Input/output error" 00:16:46.718 } 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.718 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.978 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:46.979 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:47.239 request: 00:16:47.239 { 00:16:47.239 "name": "nvme0", 00:16:47.239 "trtype": "rdma", 00:16:47.239 "traddr": "192.168.100.8", 00:16:47.239 "adrfam": "ipv4", 00:16:47.239 "trsvcid": "4420", 00:16:47.239 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:47.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:47.239 "prchk_reftag": false, 00:16:47.239 "prchk_guard": false, 00:16:47.239 "hdgst": false, 00:16:47.239 "ddgst": false, 00:16:47.239 "dhchap_key": "key0", 00:16:47.239 "dhchap_ctrlr_key": "key1", 00:16:47.239 "allow_unrecognized_csi": false, 00:16:47.239 "method": "bdev_nvme_attach_controller", 00:16:47.239 "req_id": 1 00:16:47.239 } 00:16:47.239 Got JSON-RPC error response 00:16:47.239 response: 00:16:47.239 { 00:16:47.239 "code": -5, 00:16:47.239 "message": "Input/output error" 00:16:47.239 } 00:16:47.498 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:47.498 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.498 
18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.498 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.498 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:47.498 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:47.499 18:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:47.499 nvme0n1 00:16:47.759 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:47.759 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:47.759 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.759 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.759 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.759 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.019 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:16:48.019 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.019 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.019 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.019 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:48.019 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:48.020 18:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:48.962 nvme0n1 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:48.962 18:02:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:48.962 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.221 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.221 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:49.221 18:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: --dhchap-ctrl-secret DHHC-1:03:MDg3ZDUzYWJjMzQ4ZmU0MGM1N2M4YTVjMmUxNjdiMDRkNDA1NjkwYmU3NDgxMGJiN2YzYWFiNmM3Nzk3NzJhOUxPPHI=: 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:50.164 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:50.735 request: 00:16:50.735 { 00:16:50.735 "name": "nvme0", 00:16:50.735 "trtype": "rdma", 00:16:50.735 "traddr": "192.168.100.8", 00:16:50.735 "adrfam": "ipv4", 00:16:50.735 "trsvcid": "4420", 00:16:50.735 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:50.735 "prchk_reftag": false, 00:16:50.735 "prchk_guard": false, 00:16:50.735 "hdgst": false, 00:16:50.735 "ddgst": false, 00:16:50.735 "dhchap_key": "key1", 00:16:50.735 "allow_unrecognized_csi": false, 00:16:50.735 "method": "bdev_nvme_attach_controller", 00:16:50.735 "req_id": 1 00:16:50.735 } 00:16:50.735 Got JSON-RPC error response 00:16:50.735 response: 00:16:50.735 { 00:16:50.735 "code": -5, 00:16:50.735 "message": "Input/output error" 00:16:50.735 } 00:16:50.735 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:50.735 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:50.735 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:50.735 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:50.736 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:50.736 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:50.736 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:51.679 nvme0n1 00:16:51.679 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:51.679 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:51.679 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:51.939 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:52.199 nvme0n1 00:16:52.199 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:52.199 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:52.199 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.461 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.461 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.461 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: '' 2s 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: ]] 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGIxYTBmMjY5MTdkYzM3MjQ4MmQxOWFjNTMyMjFiNWMP/WL5: 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:52.722 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.662 18:02:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: 2s 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: ]] 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjRjZDA4MDAyZTgyMjAwZDk3YWJiYzZiMThlZjA3MTcyMTNjMjk5NGMzYTRhZTQ3df6fcg==: 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:54.662 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.204 18:02:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:57.204 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:57.776 nvme0n1 00:16:57.776 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.776 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.776 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.776 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.776 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.776 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:58.348 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:58.348 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:58.348 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.608 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.608 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:58.608 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.608 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.608 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.608 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:58.608 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:58.608 18:02:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:58.608 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:58.608 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:58.870 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:59.442 request: 00:16:59.442 { 00:16:59.442 "name": "nvme0", 00:16:59.442 "dhchap_key": "key1", 00:16:59.442 "dhchap_ctrlr_key": "key3", 00:16:59.442 "method": "bdev_nvme_set_keys", 00:16:59.442 "req_id": 1 00:16:59.442 } 00:16:59.442 Got JSON-RPC error response 00:16:59.442 response: 00:16:59.442 { 00:16:59.442 "code": -13, 00:16:59.442 "message": "Permission denied" 00:16:59.442 } 00:16:59.442 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:59.442 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.442 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.442 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.442 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:16:59.442 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:59.442 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.703 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:59.703 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:00.646 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:00.646 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:00.646 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.646 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:00.646 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:02.032 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:02.976 nvme0n1 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:02.976 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:03.238 request: 00:17:03.238 { 00:17:03.238 "name": "nvme0", 00:17:03.238 "dhchap_key": "key2", 00:17:03.238 "dhchap_ctrlr_key": "key0", 00:17:03.238 "method": "bdev_nvme_set_keys", 00:17:03.238 "req_id": 1 00:17:03.238 } 00:17:03.238 Got JSON-RPC error response 00:17:03.238 response: 00:17:03.238 { 00:17:03.238 "code": -13, 00:17:03.238 "message": "Permission denied" 00:17:03.238 } 00:17:03.238 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:03.238 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.238 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.238 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.238 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:03.238 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:03.238 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.500 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:03.500 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:04.443 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:04.443 18:02:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:04.443 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.704 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:04.704 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:05.646 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:05.646 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:05.646 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3519015 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3519015 ']' 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3519015 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3519015 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3519015' 00:17:05.906 killing process with pid 3519015 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3519015 00:17:05.906 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3519015 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
00:17:06.167 rmmod nvme_rdma 00:17:06.167 rmmod nvme_fabrics 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3550223 ']' 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3550223 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3550223 ']' 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3550223 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3550223 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3550223' 00:17:06.167 killing process with pid 3550223 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3550223 00:17:06.167 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3550223 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nO5 /tmp/spdk.key-sha256.NjA /tmp/spdk.key-sha384.QsH /tmp/spdk.key-sha512.9kW /tmp/spdk.key-sha512.CTy /tmp/spdk.key-sha384.3xd /tmp/spdk.key-sha256.ua1 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:17:06.429 00:17:06.429 real 3m5.836s 00:17:06.429 user 6m54.817s 00:17:06.429 sys 0m22.039s 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.429 ************************************ 00:17:06.429 END TEST nvmf_auth_target 00:17:06.429 ************************************ 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a 
]] 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.429 ************************************ 00:17:06.429 START TEST nvmf_srq_overwhelm 00:17:06.429 ************************************ 00:17:06.429 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:17:06.691 * Looking for test storage... 00:17:06.691 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:06.691 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:06.691 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:17:06.691 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:06.691 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:06.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.692 --rc genhtml_branch_coverage=1 00:17:06.692 --rc genhtml_function_coverage=1 00:17:06.692 --rc genhtml_legend=1 00:17:06.692 --rc geninfo_all_blocks=1 00:17:06.692 --rc geninfo_unexecuted_blocks=1 00:17:06.692 00:17:06.692 ' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:06.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.692 --rc genhtml_branch_coverage=1 00:17:06.692 --rc genhtml_function_coverage=1 00:17:06.692 --rc genhtml_legend=1 00:17:06.692 --rc geninfo_all_blocks=1 00:17:06.692 --rc geninfo_unexecuted_blocks=1 00:17:06.692 00:17:06.692 ' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:06.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.692 --rc genhtml_branch_coverage=1 00:17:06.692 --rc genhtml_function_coverage=1 00:17:06.692 --rc genhtml_legend=1 00:17:06.692 --rc geninfo_all_blocks=1 00:17:06.692 --rc geninfo_unexecuted_blocks=1 00:17:06.692 00:17:06.692 ' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:06.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.692 --rc genhtml_branch_coverage=1 00:17:06.692 --rc genhtml_function_coverage=1 00:17:06.692 --rc genhtml_legend=1 00:17:06.692 --rc geninfo_all_blocks=1 00:17:06.692 --rc geninfo_unexecuted_blocks=1 00:17:06.692 00:17:06.692 ' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.692 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:17:06.692 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:17:06.693 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:14.834 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:17:14.835 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:17:14.835 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:17:14.835 Found net devices under 0000:98:00.0: mlx_0_0 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:17:14.835 Found net devices under 0000:98:00.1: mlx_0_1 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
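Both Mellanox 0x1015 ports (ConnectX-4 Lx) under 0000:98:00.0/1 map to net devices mlx_0_0 and mlx_0_1, so is_hw=yes and, since the transport is rdma, rdma_device_init brings up the RoCE kernel stack next. The modprobe sequence traced below reduces to:

    # Kernel modules loaded by rdma_device_init for NVMe-oF over RDMA:
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done

allocate_nic_ips then reads 192.168.100.8/24 and 192.168.100.9/24 off the two ports. (The earlier "[: : integer expression expected" complaint from nvmf/common.sh line 33 is an empty flag reaching a numeric test, '[' '' -eq 1 ']'; it is noisy but does not affect the run.)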
00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:14.835 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:14.836 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:14.836 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:17:14.836 altname enp152s0f0np0 00:17:14.836 altname ens817f0np0 00:17:14.836 inet 192.168.100.8/24 scope global mlx_0_0 00:17:14.836 valid_lft forever preferred_lft forever 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:14.836 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:14.836 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:17:14.836 altname enp152s0f1np1 00:17:14.836 altname ens817f1np1 00:17:14.836 inet 192.168.100.9/24 scope global mlx_0_1 00:17:14.836 valid_lft forever preferred_lft forever 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:14.836 192.168.100.9' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:14.836 192.168.100.9' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:14.836 192.168.100.9' 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:17:14.836 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=3559263 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3559263 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 3559263 ']' 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
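nvmfappstart then launches the target on all four cores with the full tracepoint mask and waits for the RPC socket. Condensed from the trace (binary path as in this workspace; once /var/tmp/spdk.sock answers, the transport is created with the buffers this test will try to exhaust):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # After waitforlisten succeeds on /var/tmp/spdk.sock:
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024

Here -u is the I/O unit size and -s the maximum shared-receive-queue depth, the resource nvmf_srq_overwhelm is built to saturate.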
00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.097 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:15.097 [2024-11-20 18:02:33.463958] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:17:15.097 [2024-11-20 18:02:33.464033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.097 [2024-11-20 18:02:33.548162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:15.097 [2024-11-20 18:02:33.590906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.097 [2024-11-20 18:02:33.590944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.097 [2024-11-20 18:02:33.590952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.097 [2024-11-20 18:02:33.590959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.097 [2024-11-20 18:02:33.590965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.097 [2024-11-20 18:02:33.592588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.097 [2024-11-20 18:02:33.592705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.097 [2024-11-20 18:02:33.592866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.097 [2024-11-20 18:02:33.592876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.038 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.038 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:17:16.038 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:16.039 [2024-11-20 18:02:34.357440] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2433f30/0x2438420) succeed. 00:17:16.039 [2024-11-20 18:02:34.372073] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24355c0/0x2479ac0) succeed. 
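Each pass of the seq 0 5 loop below provisions one subsystem end to end before the host connects to it. The first iteration, reduced to its RPC calls (all values straight from the trace):

    rpc='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev with 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

Later iterations repeat this with cnode1 through cnode5, Malloc1 through Malloc5, and serial numbers ending 01 through 05.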
00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:16.039 Malloc0 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:16.039 [2024-11-20 18:02:34.483632] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.039 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:17:17.422 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:17:17.422 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:17.422 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:17.422 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# grep -q -w nvme0n1 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.683 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:17.683 Malloc1 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.683 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:19.064 Malloc2 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.064 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:17:20.650 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:17:20.650 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:20.650 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:20.650 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:17:20.650 18:02:39 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:20.650 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:20.651 Malloc3 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.651 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:17:22.036 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:17:22.036 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:22.036 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.036 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:22.297 
18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:22.297 Malloc4 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.297 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.683 Malloc5 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.683 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:17:25.625 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:17:25.625 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:25.625 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:17:25.625 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:25.625 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:17:25.625 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:25.625 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:25.625 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:17:25.625 
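The xtrace above is target/srq_overwhelm.sh looping `for i in $(seq 0 5)`: each iteration creates subsystem nqn.2016-06.io.spdk:cnode$i, backs it with a 64 MiB malloc bdev, adds an RDMA listener on 192.168.100.8:4420, connects the host with nvme-cli, and waits until the new namespace appears in lsblk. A minimal standalone sketch of that setup loop, assuming SPDK's scripts/rpc.py is on PATH (the trace's rpc_cmd is the test harness wrapper around it), the RDMA transport was already created earlier in the script, and wait_for_block is a simplified stand-in for the harness's waitforforblk-style polling:

    #!/usr/bin/env bash
    # Sketch of the srq_overwhelm.sh setup loop; see assumptions above.
    set -e
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    ADDR=192.168.100.8
    PORT=4420

    wait_for_block() {   # simplified stand-in for common/autotest_common.sh waitforblk
      local dev=$1 i=0
      until lsblk -l -o NAME | grep -q -w "$dev"; do
        (( ++i > 15 )) && return 1   # give up after ~15s
        sleep 1
      done
    }

    for i in $(seq 0 5); do
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      rpc.py bdev_malloc_create 64 512 -b "Malloc$i"     # 64 MiB bdev, 512 B blocks
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a "$ADDR" -s "$PORT"
      nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}" \
           -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a "$ADDR" -s "$PORT"
      wait_for_block "nvme${i}n1"
    done

With all six namespaces visible as /dev/nvme0n1 through /dev/nvme5n1, the script launches fio via scripts/fio-wrapper, whose generated job file is dumped next.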
[global] 00:17:25.625 thread=1 00:17:25.625 invalidate=1 00:17:25.625 rw=read 00:17:25.625 time_based=1 00:17:25.625 runtime=10 00:17:25.625 ioengine=libaio 00:17:25.625 direct=1 00:17:25.625 bs=1048576 00:17:25.625 iodepth=128 00:17:25.625 norandommap=1 00:17:25.625 numjobs=13 00:17:25.625 00:17:25.625 [job0] 00:17:25.625 filename=/dev/nvme0n1 00:17:25.625 [job1] 00:17:25.625 filename=/dev/nvme1n1 00:17:25.625 [job2] 00:17:25.625 filename=/dev/nvme2n1 00:17:25.625 [job3] 00:17:25.625 filename=/dev/nvme3n1 00:17:25.625 [job4] 00:17:25.625 filename=/dev/nvme4n1 00:17:25.625 [job5] 00:17:25.625 filename=/dev/nvme5n1 00:17:25.625 Could not set queue depth (nvme0n1) 00:17:25.625 Could not set queue depth (nvme1n1) 00:17:25.625 Could not set queue depth (nvme2n1) 00:17:25.625 Could not set queue depth (nvme3n1) 00:17:25.625 Could not set queue depth (nvme4n1) 00:17:25.625 Could not set queue depth (nvme5n1) 00:17:25.886 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:25.886 ... 00:17:25.886 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:25.886 ... 00:17:25.886 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:25.886 ... 00:17:25.886 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:25.886 ... 00:17:25.886 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:25.886 ... 00:17:25.886 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:25.886 ... 
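Everything from [global] down to the job5 entry is the job file fio-wrapper generated from its flags, as the dump itself confirms: -t read became rw=read, -i 1048576 became bs=1048576, -d 128 became iodepth=128, -r 10 became runtime=10 with time_based=1, -n 13 became numjobs=13, and -p nvmf selected the six freshly connected /dev/nvme[0-5]n1 devices as the per-job filenames. The "Could not set queue depth" lines are fio warning that it could not adjust the block devices' queue depth via sysfs, which is harmless for this run. A roughly equivalent direct invocation for a single device might look like this (a sketch, not the wrapper's literal command line):

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --bs=1048576 --iodepth=128 --numjobs=13 \
        --time_based=1 --runtime=10 --ioengine=libaio \
        --direct=1 --invalidate=1 --norandommap=1 --thread=1

Six such jobs at numjobs=13 each account for the "Starting 78 threads" line below, and in the per-job results bandwidth is simply IOPS times the 1 MiB block size: job0's first group, for example, reports 29 IOPS and 29.9 MiB/s, i.e. 302 MiB moved in 10091 ms.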
00:17:25.886 fio-3.35 00:17:25.886 Starting 78 threads 00:17:40.806 00:17:40.806 job0: (groupid=0, jobs=1): err= 0: pid=3561673: Wed Nov 20 18:02:58 2024 00:17:40.806 read: IOPS=29, BW=29.9MiB/s (31.4MB/s)(302MiB/10091msec) 00:17:40.806 slat (usec): min=46, max=3247.0k, avg=33114.53, stdev=254942.82 00:17:40.806 clat (msec): min=89, max=8836, avg=4006.80, stdev=3827.07 00:17:40.806 lat (msec): min=91, max=8839, avg=4039.91, stdev=3830.63 00:17:40.806 clat percentiles (msec): 00:17:40.806 | 1.00th=[ 95], 5.00th=[ 300], 10.00th=[ 430], 20.00th=[ 743], 00:17:40.806 | 30.00th=[ 793], 40.00th=[ 877], 50.00th=[ 995], 60.00th=[ 8221], 00:17:40.806 | 70.00th=[ 8288], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[ 8792], 00:17:40.806 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:40.806 | 99.99th=[ 8792] 00:17:40.806 bw ( KiB/s): min= 2043, max=161792, per=1.75%, avg=59142.67, stdev=62994.30, samples=6 00:17:40.806 iops : min= 1, max= 158, avg=57.50, stdev=61.66, samples=6 00:17:40.806 lat (msec) : 100=1.32%, 250=2.32%, 500=6.95%, 750=12.25%, 1000=27.81% 00:17:40.807 lat (msec) : 2000=6.62%, >=2000=42.72% 00:17:40.807 cpu : usr=0.00%, sys=1.25%, ctx=443, majf=0, minf=32769 00:17:40.807 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.1% 00:17:40.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.807 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:40.807 issued rwts: total=302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.807 job0: (groupid=0, jobs=1): err= 0: pid=3561674: Wed Nov 20 18:02:58 2024 00:17:40.807 read: IOPS=86, BW=86.3MiB/s (90.5MB/s)(1055MiB/12222msec) 00:17:40.807 slat (usec): min=62, max=2119.0k, avg=9523.97, stdev=91500.54 00:17:40.807 clat (msec): min=318, max=6749, avg=1422.13, stdev=1877.39 00:17:40.807 lat (msec): min=322, max=6750, avg=1431.65, stdev=1882.53 00:17:40.807 clat percentiles (msec): 00:17:40.807 | 1.00th=[ 347], 5.00th=[ 435], 10.00th=[ 567], 20.00th=[ 625], 00:17:40.807 | 30.00th=[ 634], 40.00th=[ 751], 50.00th=[ 793], 60.00th=[ 827], 00:17:40.807 | 70.00th=[ 844], 80.00th=[ 860], 90.00th=[ 6477], 95.00th=[ 6611], 00:17:40.807 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:17:40.807 | 99.99th=[ 6745] 00:17:40.807 bw ( KiB/s): min= 1783, max=233472, per=4.34%, avg=146114.23, stdev=75237.38, samples=13 00:17:40.807 iops : min= 1, max= 228, avg=142.46, stdev=73.67, samples=13 00:17:40.807 lat (msec) : 500=7.20%, 750=32.99%, 1000=46.82%, >=2000=12.99% 00:17:40.807 cpu : usr=0.07%, sys=2.18%, ctx=1875, majf=0, minf=32769 00:17:40.807 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:17:40.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.807 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.808 issued rwts: total=1055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.808 job0: (groupid=0, jobs=1): err= 0: pid=3561675: Wed Nov 20 18:02:58 2024 00:17:40.808 read: IOPS=22, BW=22.4MiB/s (23.5MB/s)(316MiB/14076msec) 00:17:40.808 slat (usec): min=22, max=2141.8k, avg=37825.02, stdev=220953.32 00:17:40.808 clat (msec): min=1748, max=10686, avg=5275.23, stdev=3772.66 00:17:40.808 lat (msec): min=1752, max=10686, avg=5313.06, stdev=3775.31 00:17:40.808 clat percentiles (msec): 00:17:40.808 | 1.00th=[ 1770], 
5.00th=[ 1787], 10.00th=[ 1821], 20.00th=[ 1888], 00:17:40.808 | 30.00th=[ 2005], 40.00th=[ 2140], 50.00th=[ 2366], 60.00th=[ 6409], 00:17:40.808 | 70.00th=[ 9866], 80.00th=[10134], 90.00th=[10402], 95.00th=[10537], 00:17:40.808 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:17:40.808 | 99.99th=[10671] 00:17:40.808 bw ( KiB/s): min= 2043, max=122880, per=1.15%, avg=38698.50, stdev=40744.29, samples=10 00:17:40.808 iops : min= 1, max= 120, avg=37.60, stdev=39.97, samples=10 00:17:40.808 lat (msec) : 2000=29.43%, >=2000=70.57% 00:17:40.808 cpu : usr=0.00%, sys=0.64%, ctx=610, majf=0, minf=32769 00:17:40.808 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.1%, >=64=80.1% 00:17:40.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.808 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:40.808 issued rwts: total=316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.809 job0: (groupid=0, jobs=1): err= 0: pid=3561676: Wed Nov 20 18:02:58 2024 00:17:40.809 read: IOPS=17, BW=17.5MiB/s (18.3MB/s)(252MiB/14410msec) 00:17:40.809 slat (usec): min=31, max=2165.3k, avg=40361.34, stdev=250253.89 00:17:40.809 clat (msec): min=2486, max=10469, avg=6508.98, stdev=2230.44 00:17:40.809 lat (msec): min=2488, max=10470, avg=6549.34, stdev=2223.72 00:17:40.809 clat percentiles (msec): 00:17:40.809 | 1.00th=[ 2500], 5.00th=[ 2567], 10.00th=[ 2567], 20.00th=[ 4597], 00:17:40.809 | 30.00th=[ 5805], 40.00th=[ 5873], 50.00th=[ 6208], 60.00th=[ 8087], 00:17:40.809 | 70.00th=[ 8154], 80.00th=[ 8356], 90.00th=[ 8557], 95.00th=[ 8557], 00:17:40.809 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10537], 99.95th=[10537], 00:17:40.809 | 99.99th=[10537] 00:17:40.809 bw ( KiB/s): min= 1484, max=167936, per=1.26%, avg=42572.67, stdev=65739.33, samples=6 00:17:40.809 iops : min= 1, max= 164, avg=41.50, stdev=64.25, samples=6 00:17:40.809 lat (msec) : >=2000=100.00% 00:17:40.809 cpu : usr=0.01%, sys=1.02%, ctx=224, majf=0, minf=32769 00:17:40.809 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.7%, >=64=75.0% 00:17:40.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.810 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:17:40.810 issued rwts: total=252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.810 job0: (groupid=0, jobs=1): err= 0: pid=3561677: Wed Nov 20 18:02:58 2024 00:17:40.810 read: IOPS=9, BW=9827KiB/s (10.1MB/s)(138MiB/14380msec) 00:17:40.810 slat (usec): min=252, max=3371.7k, avg=73478.68, stdev=410352.26 00:17:40.810 clat (msec): min=4237, max=14337, avg=9467.88, stdev=3802.10 00:17:40.810 lat (msec): min=6018, max=14341, avg=9541.36, stdev=3797.84 00:17:40.810 clat percentiles (msec): 00:17:40.810 | 1.00th=[ 6007], 5.00th=[ 6007], 10.00th=[ 6074], 20.00th=[ 6141], 00:17:40.810 | 30.00th=[ 6275], 40.00th=[ 6275], 50.00th=[ 6342], 60.00th=[ 8490], 00:17:40.810 | 70.00th=[14160], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:17:40.810 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:17:40.810 | 99.99th=[14295] 00:17:40.810 bw ( KiB/s): min= 1549, max=20480, per=0.33%, avg=11014.50, stdev=13386.24, samples=2 00:17:40.810 iops : min= 1, max= 20, avg=10.50, stdev=13.44, samples=2 00:17:40.810 lat (msec) : >=2000=100.00% 00:17:40.810 cpu : usr=0.00%, sys=1.02%, ctx=148, majf=0, minf=32769 
00:17:40.811 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.6%, 32=23.2%, >=64=54.3% 00:17:40.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.811 complete : 0=0.0%, 4=91.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=8.3% 00:17:40.811 issued rwts: total=138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.811 job0: (groupid=0, jobs=1): err= 0: pid=3561678: Wed Nov 20 18:02:58 2024 00:17:40.811 read: IOPS=49, BW=49.2MiB/s (51.6MB/s)(597MiB/12136msec) 00:17:40.811 slat (usec): min=23, max=2146.6k, avg=16750.89, stdev=143217.34 00:17:40.811 clat (msec): min=630, max=8554, avg=2276.47, stdev=1764.36 00:17:40.811 lat (msec): min=631, max=9897, avg=2293.22, stdev=1773.19 00:17:40.811 clat percentiles (msec): 00:17:40.811 | 1.00th=[ 634], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 634], 00:17:40.811 | 30.00th=[ 667], 40.00th=[ 693], 50.00th=[ 768], 60.00th=[ 2769], 00:17:40.811 | 70.00th=[ 4144], 80.00th=[ 4178], 90.00th=[ 4530], 95.00th=[ 4732], 00:17:40.811 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 8557], 99.95th=[ 8557], 00:17:40.811 | 99.99th=[ 8557] 00:17:40.811 bw ( KiB/s): min= 2043, max=217088, per=3.17%, avg=106859.89, stdev=83057.74, samples=9 00:17:40.811 iops : min= 1, max= 212, avg=104.22, stdev=81.29, samples=9 00:17:40.811 lat (msec) : 750=47.91%, 1000=3.18%, 2000=3.18%, >=2000=45.73% 00:17:40.811 cpu : usr=0.03%, sys=1.18%, ctx=557, majf=0, minf=32769 00:17:40.811 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:17:40.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.811 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.811 issued rwts: total=597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.811 job0: (groupid=0, jobs=1): err= 0: pid=3561680: Wed Nov 20 18:02:58 2024 00:17:40.811 read: IOPS=93, BW=93.7MiB/s (98.3MB/s)(1331MiB/14202msec) 00:17:40.811 slat (usec): min=28, max=2126.8k, avg=9070.21, stdev=79498.17 00:17:40.811 clat (msec): min=406, max=7656, avg=1312.18, stdev=1764.51 00:17:40.811 lat (msec): min=407, max=7658, avg=1321.25, stdev=1771.38 00:17:40.811 clat percentiles (msec): 00:17:40.812 | 1.00th=[ 409], 5.00th=[ 414], 10.00th=[ 443], 20.00th=[ 592], 00:17:40.812 | 30.00th=[ 600], 40.00th=[ 701], 50.00th=[ 751], 60.00th=[ 785], 00:17:40.812 | 70.00th=[ 818], 80.00th=[ 844], 90.00th=[ 3339], 95.00th=[ 7349], 00:17:40.812 | 99.00th=[ 7617], 99.50th=[ 7617], 99.90th=[ 7684], 99.95th=[ 7684], 00:17:40.812 | 99.99th=[ 7684] 00:17:40.812 bw ( KiB/s): min= 2048, max=311296, per=4.29%, avg=144735.41, stdev=85355.46, samples=17 00:17:40.812 iops : min= 2, max= 304, avg=141.24, stdev=83.38, samples=17 00:17:40.812 lat (msec) : 500=13.45%, 750=37.34%, 1000=35.84%, >=2000=13.37% 00:17:40.812 cpu : usr=0.07%, sys=1.75%, ctx=2208, majf=0, minf=32769 00:17:40.812 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:17:40.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.812 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.812 issued rwts: total=1331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.812 job0: (groupid=0, jobs=1): err= 0: pid=3561681: Wed Nov 20 18:02:58 2024 00:17:40.812 read: IOPS=34, BW=34.5MiB/s (36.2MB/s)(420MiB/12175msec) 00:17:40.812 
slat (usec): min=36, max=2144.3k, avg=23817.09, stdev=191616.05 00:17:40.812 clat (msec): min=497, max=10444, avg=3548.21, stdev=4333.99 00:17:40.812 lat (msec): min=500, max=10450, avg=3572.02, stdev=4343.83 00:17:40.812 clat percentiles (msec): 00:17:40.812 | 1.00th=[ 502], 5.00th=[ 523], 10.00th=[ 558], 20.00th=[ 659], 00:17:40.813 | 30.00th=[ 693], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 743], 00:17:40.813 | 70.00th=[ 6141], 80.00th=[10134], 90.00th=[10268], 95.00th=[10402], 00:17:40.813 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:17:40.813 | 99.99th=[10402] 00:17:40.813 bw ( KiB/s): min= 1845, max=227328, per=2.22%, avg=74982.00, stdev=89649.08, samples=8 00:17:40.813 iops : min= 1, max= 222, avg=73.00, stdev=87.76, samples=8 00:17:40.813 lat (msec) : 500=0.48%, 750=62.38%, 1000=5.95%, 2000=0.24%, >=2000=30.95% 00:17:40.813 cpu : usr=0.02%, sys=1.33%, ctx=425, majf=0, minf=32769 00:17:40.813 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:17:40.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.813 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:40.813 issued rwts: total=420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.813 job0: (groupid=0, jobs=1): err= 0: pid=3561682: Wed Nov 20 18:02:58 2024 00:17:40.813 read: IOPS=20, BW=20.0MiB/s (21.0MB/s)(286MiB/14280msec) 00:17:40.814 slat (usec): min=25, max=4304.7k, avg=42526.54, stdev=341516.51 00:17:40.814 clat (msec): min=1118, max=14041, avg=5636.48, stdev=4361.73 00:17:40.814 lat (msec): min=1124, max=14069, avg=5679.00, stdev=4373.88 00:17:40.814 clat percentiles (msec): 00:17:40.814 | 1.00th=[ 1116], 5.00th=[ 1519], 10.00th=[ 1552], 20.00th=[ 1603], 00:17:40.814 | 30.00th=[ 1703], 40.00th=[ 1804], 50.00th=[ 2467], 60.00th=[10134], 00:17:40.814 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10537], 95.00th=[10671], 00:17:40.814 | 99.00th=[12818], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:40.814 | 99.99th=[14026] 00:17:40.814 bw ( KiB/s): min= 2048, max=155648, per=1.61%, avg=54272.67, stdev=61408.59, samples=6 00:17:40.814 iops : min= 2, max= 152, avg=53.00, stdev=59.97, samples=6 00:17:40.814 lat (msec) : 2000=45.80%, >=2000=54.20% 00:17:40.814 cpu : usr=0.00%, sys=0.84%, ctx=425, majf=0, minf=32769 00:17:40.814 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=78.0% 00:17:40.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.814 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:40.815 issued rwts: total=286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.815 job0: (groupid=0, jobs=1): err= 0: pid=3561683: Wed Nov 20 18:02:58 2024 00:17:40.815 read: IOPS=11, BW=11.6MiB/s (12.2MB/s)(164MiB/14084msec) 00:17:40.815 slat (usec): min=67, max=6434.2k, avg=72934.00, stdev=560741.83 00:17:40.815 clat (msec): min=699, max=13855, avg=10613.62, stdev=5281.05 00:17:40.815 lat (msec): min=701, max=13860, avg=10686.56, stdev=5242.24 00:17:40.815 clat percentiles (msec): 00:17:40.815 | 1.00th=[ 701], 5.00th=[ 701], 10.00th=[ 709], 20.00th=[ 743], 00:17:40.815 | 30.00th=[13221], 40.00th=[13221], 50.00th=[13355], 60.00th=[13489], 00:17:40.815 | 70.00th=[13624], 80.00th=[13624], 90.00th=[13758], 95.00th=[13758], 00:17:40.815 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 
00:17:40.815 | 99.99th=[13892] 00:17:40.815 bw ( KiB/s): min= 1996, max=69632, per=0.56%, avg=18929.75, stdev=33801.51, samples=4 00:17:40.815 iops : min= 1, max= 68, avg=18.00, stdev=33.34, samples=4 00:17:40.815 lat (msec) : 750=20.73%, >=2000=79.27% 00:17:40.815 cpu : usr=0.00%, sys=0.67%, ctx=185, majf=0, minf=32769 00:17:40.815 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.9%, 16=9.8%, 32=19.5%, >=64=61.6% 00:17:40.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.815 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:17:40.815 issued rwts: total=164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.816 job0: (groupid=0, jobs=1): err= 0: pid=3561684: Wed Nov 20 18:02:58 2024 00:17:40.816 read: IOPS=2, BW=2652KiB/s (2716kB/s)(37.0MiB/14286msec) 00:17:40.816 slat (usec): min=762, max=2144.7k, avg=271758.62, stdev=683499.37 00:17:40.816 clat (msec): min=4230, max=14284, avg=12867.43, stdev=2743.46 00:17:40.816 lat (msec): min=6366, max=14285, avg=13139.18, stdev=2331.16 00:17:40.816 clat percentiles (msec): 00:17:40.816 | 1.00th=[ 4245], 5.00th=[ 6342], 10.00th=[ 6409], 20.00th=[12818], 00:17:40.816 | 30.00th=[14026], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:17:40.816 | 70.00th=[14295], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:17:40.816 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:17:40.816 | 99.99th=[14295] 00:17:40.816 lat (msec) : >=2000=100.00% 00:17:40.816 cpu : usr=0.00%, sys=0.43%, ctx=82, majf=0, minf=9473 00:17:40.816 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:17:40.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.816 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:40.816 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.816 job0: (groupid=0, jobs=1): err= 0: pid=3561685: Wed Nov 20 18:02:58 2024 00:17:40.816 read: IOPS=1, BW=1593KiB/s (1631kB/s)(22.0MiB/14141msec) 00:17:40.817 slat (usec): min=1024, max=4266.4k, avg=546222.02, stdev=1132542.35 00:17:40.817 clat (msec): min=2122, max=14138, avg=12342.38, stdev=3568.51 00:17:40.817 lat (msec): min=4263, max=14139, avg=12888.60, stdev=2757.07 00:17:40.817 clat percentiles (msec): 00:17:40.817 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[12818], 00:17:40.817 | 30.00th=[12818], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:17:40.817 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:17:40.817 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:17:40.817 | 99.99th=[14160] 00:17:40.817 lat (msec) : >=2000=100.00% 00:17:40.817 cpu : usr=0.00%, sys=0.21%, ctx=71, majf=0, minf=5633 00:17:40.817 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:17:40.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.817 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:40.817 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.817 job0: (groupid=0, jobs=1): err= 0: pid=3561686: Wed Nov 20 18:02:58 2024 00:17:40.817 read: IOPS=0, BW=508KiB/s (520kB/s)(7168KiB/14123msec) 00:17:40.817 slat (msec): min=14, max=4257, avg=1715.19, stdev=1906.39 
00:17:40.817 clat (msec): min=2116, max=14056, avg=10113.74, stdev=4586.32 00:17:40.817 lat (msec): min=6361, max=14122, avg=11828.92, stdev=3101.84 00:17:40.817 clat percentiles (msec): 00:17:40.818 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 6342], 00:17:40.818 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[12818], 60.00th=[12818], 00:17:40.818 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:17:40.818 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:40.818 | 99.99th=[14026] 00:17:40.818 lat (msec) : >=2000=100.00% 00:17:40.818 cpu : usr=0.00%, sys=0.05%, ctx=61, majf=0, minf=1793 00:17:40.818 IO depths : 1=14.3%, 2=28.6%, 4=57.1%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:40.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.818 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.818 issued rwts: total=7,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.819 job1: (groupid=0, jobs=1): err= 0: pid=3561704: Wed Nov 20 18:02:58 2024 00:17:40.819 read: IOPS=63, BW=63.3MiB/s (66.4MB/s)(774MiB/12226msec) 00:17:40.819 slat (usec): min=33, max=1652.6k, avg=13006.59, stdev=91566.10 00:17:40.819 clat (msec): min=669, max=6229, avg=1940.38, stdev=1725.70 00:17:40.819 lat (msec): min=674, max=6231, avg=1953.38, stdev=1732.20 00:17:40.819 clat percentiles (msec): 00:17:40.819 | 1.00th=[ 684], 5.00th=[ 726], 10.00th=[ 751], 20.00th=[ 810], 00:17:40.819 | 30.00th=[ 844], 40.00th=[ 852], 50.00th=[ 860], 60.00th=[ 894], 00:17:40.819 | 70.00th=[ 2635], 80.00th=[ 2836], 90.00th=[ 5537], 95.00th=[ 5873], 00:17:40.819 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:17:40.819 | 99.99th=[ 6208] 00:17:40.819 bw ( KiB/s): min= 1783, max=198656, per=2.81%, avg=94597.43, stdev=65928.89, samples=14 00:17:40.819 iops : min= 1, max= 194, avg=92.14, stdev=64.56, samples=14 00:17:40.819 lat (msec) : 750=10.21%, 1000=54.26%, >=2000=35.53% 00:17:40.819 cpu : usr=0.05%, sys=1.74%, ctx=1533, majf=0, minf=32769 00:17:40.819 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:17:40.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.819 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.819 issued rwts: total=774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.819 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.819 job1: (groupid=0, jobs=1): err= 0: pid=3561706: Wed Nov 20 18:02:58 2024 00:17:40.819 read: IOPS=70, BW=70.8MiB/s (74.3MB/s)(716MiB/10109msec) 00:17:40.819 slat (usec): min=28, max=2073.3k, avg=13961.85, stdev=89680.85 00:17:40.819 clat (msec): min=108, max=5766, avg=1467.96, stdev=1357.32 00:17:40.819 lat (msec): min=110, max=5769, avg=1481.92, stdev=1366.27 00:17:40.819 clat percentiles (msec): 00:17:40.819 | 1.00th=[ 127], 5.00th=[ 347], 10.00th=[ 592], 20.00th=[ 751], 00:17:40.819 | 30.00th=[ 818], 40.00th=[ 869], 50.00th=[ 944], 60.00th=[ 1099], 00:17:40.819 | 70.00th=[ 1167], 80.00th=[ 1770], 90.00th=[ 4799], 95.00th=[ 5134], 00:17:40.819 | 99.00th=[ 5470], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:17:40.820 | 99.99th=[ 5738] 00:17:40.820 bw ( KiB/s): min=30720, max=188416, per=3.25%, avg=109623.45, stdev=60324.93, samples=11 00:17:40.820 iops : min= 30, max= 184, avg=106.91, stdev=58.94, samples=11 00:17:40.820 lat (msec) : 250=3.63%, 500=4.47%, 750=10.34%, 1000=36.59%, 
2000=28.07% 00:17:40.820 lat (msec) : >=2000=16.90% 00:17:40.820 cpu : usr=0.06%, sys=1.81%, ctx=1098, majf=0, minf=32769 00:17:40.820 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:17:40.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.820 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.820 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.820 job1: (groupid=0, jobs=1): err= 0: pid=3561707: Wed Nov 20 18:02:58 2024 00:17:40.820 read: IOPS=62, BW=63.0MiB/s (66.1MB/s)(634MiB/10064msec) 00:17:40.820 slat (usec): min=30, max=2138.1k, avg=15771.23, stdev=113839.17 00:17:40.820 clat (msec): min=60, max=5803, avg=1344.81, stdev=1424.45 00:17:40.820 lat (msec): min=111, max=5825, avg=1360.58, stdev=1438.60 00:17:40.820 clat percentiles (msec): 00:17:40.820 | 1.00th=[ 122], 5.00th=[ 207], 10.00th=[ 405], 20.00th=[ 584], 00:17:40.820 | 30.00th=[ 592], 40.00th=[ 592], 50.00th=[ 600], 60.00th=[ 642], 00:17:40.820 | 70.00th=[ 667], 80.00th=[ 2534], 90.00th=[ 3272], 95.00th=[ 5537], 00:17:40.821 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:17:40.821 | 99.99th=[ 5805] 00:17:40.821 bw ( KiB/s): min= 6144, max=223232, per=3.58%, avg=120520.71, stdev=96024.40, samples=7 00:17:40.821 iops : min= 6, max= 218, avg=117.57, stdev=93.86, samples=7 00:17:40.821 lat (msec) : 100=0.16%, 250=4.89%, 500=9.78%, 750=56.31%, >=2000=28.86% 00:17:40.821 cpu : usr=0.00%, sys=1.64%, ctx=905, majf=0, minf=32769 00:17:40.821 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:17:40.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.821 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.821 issued rwts: total=634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.821 job1: (groupid=0, jobs=1): err= 0: pid=3561708: Wed Nov 20 18:02:58 2024 00:17:40.821 read: IOPS=95, BW=95.3MiB/s (99.9MB/s)(1350MiB/14167msec) 00:17:40.821 slat (usec): min=28, max=2104.7k, avg=8903.39, stdev=98356.05 00:17:40.821 clat (msec): min=298, max=8858, avg=1294.85, stdev=2358.01 00:17:40.821 lat (msec): min=300, max=8859, avg=1303.76, stdev=2365.77 00:17:40.821 clat percentiles (msec): 00:17:40.821 | 1.00th=[ 300], 5.00th=[ 300], 10.00th=[ 305], 20.00th=[ 305], 00:17:40.821 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 535], 60.00th=[ 776], 00:17:40.821 | 70.00th=[ 810], 80.00th=[ 827], 90.00th=[ 2140], 95.00th=[ 8658], 00:17:40.821 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8926], 00:17:40.821 | 99.99th=[ 8926] 00:17:40.821 bw ( KiB/s): min= 2048, max=425984, per=5.31%, avg=178884.93, stdev=153509.85, samples=14 00:17:40.821 iops : min= 2, max= 416, avg=174.64, stdev=149.97, samples=14 00:17:40.821 lat (msec) : 500=49.41%, 750=5.41%, 1000=35.11%, >=2000=10.07% 00:17:40.821 cpu : usr=0.01%, sys=1.48%, ctx=1880, majf=0, minf=32769 00:17:40.821 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:17:40.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.821 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.821 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.821 job1: (groupid=0, jobs=1): err= 
0: pid=3561709: Wed Nov 20 18:02:58 2024 00:17:40.821 read: IOPS=20, BW=20.8MiB/s (21.8MB/s)(254MiB/12221msec) 00:17:40.821 slat (usec): min=667, max=2201.0k, avg=39584.92, stdev=239035.71 00:17:40.821 clat (msec): min=943, max=11274, avg=5893.81, stdev=3717.11 00:17:40.822 lat (msec): min=946, max=11276, avg=5933.40, stdev=3722.52 00:17:40.822 clat percentiles (msec): 00:17:40.822 | 1.00th=[ 953], 5.00th=[ 986], 10.00th=[ 1003], 20.00th=[ 1028], 00:17:40.822 | 30.00th=[ 4212], 40.00th=[ 5470], 50.00th=[ 5537], 60.00th=[ 6141], 00:17:40.822 | 70.00th=[ 7080], 80.00th=[10939], 90.00th=[11073], 95.00th=[11208], 00:17:40.822 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:17:40.822 | 99.99th=[11208] 00:17:40.822 bw ( KiB/s): min= 1686, max=120832, per=0.96%, avg=32457.00, stdev=40832.74, samples=8 00:17:40.822 iops : min= 1, max= 118, avg=31.38, stdev=40.04, samples=8 00:17:40.822 lat (msec) : 1000=7.87%, 2000=18.11%, >=2000=74.02% 00:17:40.822 cpu : usr=0.02%, sys=1.28%, ctx=651, majf=0, minf=32769 00:17:40.822 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.6%, >=64=75.2% 00:17:40.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.822 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:17:40.822 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.822 job1: (groupid=0, jobs=1): err= 0: pid=3561710: Wed Nov 20 18:02:58 2024 00:17:40.822 read: IOPS=16, BW=16.3MiB/s (17.1MB/s)(201MiB/12310msec) 00:17:40.822 slat (usec): min=323, max=2184.7k, avg=50546.63, stdev=298269.14 00:17:40.822 clat (msec): min=1082, max=11831, avg=7507.70, stdev=4730.09 00:17:40.822 lat (msec): min=1091, max=11845, avg=7558.24, stdev=4721.87 00:17:40.822 clat percentiles (msec): 00:17:40.822 | 1.00th=[ 1099], 5.00th=[ 1133], 10.00th=[ 1183], 20.00th=[ 1200], 00:17:40.822 | 30.00th=[ 1217], 40.00th=[10805], 50.00th=[10939], 60.00th=[11073], 00:17:40.822 | 70.00th=[11208], 80.00th=[11342], 90.00th=[11610], 95.00th=[11745], 00:17:40.822 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:17:40.822 | 99.99th=[11879] 00:17:40.822 bw ( KiB/s): min= 1499, max=129024, per=0.64%, avg=21571.86, stdev=47480.25, samples=7 00:17:40.822 iops : min= 1, max= 126, avg=21.00, stdev=46.40, samples=7 00:17:40.822 lat (msec) : 2000=33.83%, >=2000=66.17% 00:17:40.822 cpu : usr=0.00%, sys=1.28%, ctx=487, majf=0, minf=32769 00:17:40.822 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=15.9%, >=64=68.7% 00:17:40.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.822 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:17:40.822 issued rwts: total=201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.822 job1: (groupid=0, jobs=1): err= 0: pid=3561711: Wed Nov 20 18:02:58 2024 00:17:40.822 read: IOPS=19, BW=19.4MiB/s (20.4MB/s)(280MiB/14414msec) 00:17:40.822 slat (usec): min=50, max=2148.5k, avg=36378.63, stdev=234532.47 00:17:40.822 clat (msec): min=832, max=13363, avg=6399.98, stdev=5735.22 00:17:40.822 lat (msec): min=833, max=13368, avg=6436.36, stdev=5745.50 00:17:40.822 clat percentiles (msec): 00:17:40.822 | 1.00th=[ 844], 5.00th=[ 927], 10.00th=[ 1003], 20.00th=[ 1116], 00:17:40.822 | 30.00th=[ 1167], 40.00th=[ 1183], 50.00th=[ 1234], 60.00th=[12281], 00:17:40.822 | 70.00th=[12550], 80.00th=[12953], 
90.00th=[13087], 95.00th=[13221], 00:17:40.822 | 99.00th=[13355], 99.50th=[13355], 99.90th=[13355], 99.95th=[13355], 00:17:40.822 | 99.99th=[13355] 00:17:40.823 bw ( KiB/s): min= 1467, max=124928, per=1.16%, avg=39095.38, stdev=45933.35, samples=8 00:17:40.823 iops : min= 1, max= 122, avg=38.12, stdev=44.91, samples=8 00:17:40.823 lat (msec) : 1000=9.64%, 2000=42.86%, >=2000=47.50% 00:17:40.823 cpu : usr=0.00%, sys=1.40%, ctx=497, majf=0, minf=32769 00:17:40.823 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.7%, 32=11.4%, >=64=77.5% 00:17:40.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.823 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:40.823 issued rwts: total=280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.823 job1: (groupid=0, jobs=1): err= 0: pid=3561712: Wed Nov 20 18:02:58 2024 00:17:40.823 read: IOPS=36, BW=36.5MiB/s (38.2MB/s)(441MiB/12098msec) 00:17:40.823 slat (usec): min=46, max=2174.6k, avg=22676.87, stdev=171380.28 00:17:40.823 clat (msec): min=644, max=10327, avg=3369.15, stdev=3149.36 00:17:40.823 lat (msec): min=647, max=10332, avg=3391.82, stdev=3163.85 00:17:40.823 clat percentiles (msec): 00:17:40.823 | 1.00th=[ 651], 5.00th=[ 667], 10.00th=[ 684], 20.00th=[ 693], 00:17:40.823 | 30.00th=[ 709], 40.00th=[ 735], 50.00th=[ 2089], 60.00th=[ 4396], 00:17:40.823 | 70.00th=[ 4530], 80.00th=[ 6007], 90.00th=[ 8557], 95.00th=[10268], 00:17:40.823 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:17:40.823 | 99.99th=[10268] 00:17:40.823 bw ( KiB/s): min= 1865, max=188416, per=2.12%, avg=71432.11, stdev=70324.84, samples=9 00:17:40.823 iops : min= 1, max= 184, avg=69.67, stdev=68.78, samples=9 00:17:40.823 lat (msec) : 750=45.35%, 1000=4.54%, >=2000=50.11% 00:17:40.823 cpu : usr=0.05%, sys=1.32%, ctx=535, majf=0, minf=32769 00:17:40.823 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.7% 00:17:40.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.823 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:40.823 issued rwts: total=441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.823 job1: (groupid=0, jobs=1): err= 0: pid=3561713: Wed Nov 20 18:02:58 2024 00:17:40.823 read: IOPS=23, BW=23.9MiB/s (25.1MB/s)(335MiB/14022msec) 00:17:40.823 slat (usec): min=33, max=2089.9k, avg=35459.63, stdev=219293.35 00:17:40.823 clat (msec): min=1188, max=11569, avg=5027.27, stdev=4349.35 00:17:40.823 lat (msec): min=1193, max=11573, avg=5062.73, stdev=4355.73 00:17:40.823 clat percentiles (msec): 00:17:40.823 | 1.00th=[ 1200], 5.00th=[ 1250], 10.00th=[ 1267], 20.00th=[ 1368], 00:17:40.823 | 30.00th=[ 1469], 40.00th=[ 1536], 50.00th=[ 1586], 60.00th=[ 4245], 00:17:40.823 | 70.00th=[ 8557], 80.00th=[10939], 90.00th=[11208], 95.00th=[11476], 00:17:40.823 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:17:40.823 | 99.99th=[11610] 00:17:40.823 bw ( KiB/s): min= 2052, max=112640, per=1.40%, avg=47196.56, stdev=44690.86, samples=9 00:17:40.823 iops : min= 2, max= 110, avg=45.89, stdev=43.83, samples=9 00:17:40.823 lat (msec) : 2000=53.73%, >=2000=46.27% 00:17:40.823 cpu : usr=0.03%, sys=0.79%, ctx=705, majf=0, minf=32769 00:17:40.823 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.2% 00:17:40.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.823 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:40.823 issued rwts: total=335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.823 job1: (groupid=0, jobs=1): err= 0: pid=3561714: Wed Nov 20 18:02:58 2024 00:17:40.823 read: IOPS=9, BW=9569KiB/s (9798kB/s)(132MiB/14126msec) 00:17:40.823 slat (usec): min=441, max=4268.0k, avg=90813.79, stdev=482441.56 00:17:40.823 clat (msec): min=2137, max=13976, avg=12625.16, stdev=2177.28 00:17:40.823 lat (msec): min=4207, max=13999, avg=12715.97, stdev=1975.38 00:17:40.823 clat percentiles (msec): 00:17:40.823 | 1.00th=[ 4212], 5.00th=[ 8490], 10.00th=[ 9866], 20.00th=[12953], 00:17:40.823 | 30.00th=[13087], 40.00th=[13221], 50.00th=[13355], 60.00th=[13489], 00:17:40.823 | 70.00th=[13624], 80.00th=[13758], 90.00th=[13758], 95.00th=[13892], 00:17:40.823 | 99.00th=[13892], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:40.823 | 99.99th=[14026] 00:17:40.823 bw ( KiB/s): min= 2043, max= 8062, per=0.15%, avg=5052.50, stdev=4256.08, samples=2 00:17:40.823 iops : min= 1, max= 7, avg= 4.00, stdev= 4.24, samples=2 00:17:40.823 lat (msec) : >=2000=100.00% 00:17:40.823 cpu : usr=0.00%, sys=0.47%, ctx=431, majf=0, minf=32769 00:17:40.823 IO depths : 1=0.8%, 2=1.5%, 4=3.0%, 8=6.1%, 16=12.1%, 32=24.2%, >=64=52.3% 00:17:40.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.823 complete : 0=0.0%, 4=83.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=16.7% 00:17:40.823 issued rwts: total=132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.823 job1: (groupid=0, jobs=1): err= 0: pid=3561715: Wed Nov 20 18:02:58 2024 00:17:40.823 read: IOPS=123, BW=124MiB/s (130MB/s)(1244MiB/10046msec) 00:17:40.823 slat (usec): min=24, max=2015.5k, avg=8041.43, stdev=58308.40 00:17:40.823 clat (msec): min=34, max=3248, avg=959.92, stdev=815.78 00:17:40.823 lat (msec): min=55, max=3275, avg=967.97, stdev=819.89 00:17:40.823 clat percentiles (msec): 00:17:40.823 | 1.00th=[ 68], 5.00th=[ 222], 10.00th=[ 489], 20.00th=[ 514], 00:17:40.823 | 30.00th=[ 567], 40.00th=[ 592], 50.00th=[ 600], 60.00th=[ 651], 00:17:40.823 | 70.00th=[ 1045], 80.00th=[ 1250], 90.00th=[ 3138], 95.00th=[ 3205], 00:17:40.823 | 99.00th=[ 3239], 99.50th=[ 3239], 99.90th=[ 3239], 99.95th=[ 3239], 00:17:40.823 | 99.99th=[ 3239] 00:17:40.823 bw ( KiB/s): min=14336, max=264192, per=4.51%, avg=152140.33, stdev=78819.55, samples=15 00:17:40.823 iops : min= 14, max= 258, avg=148.53, stdev=76.92, samples=15 00:17:40.823 lat (msec) : 50=0.08%, 100=1.45%, 250=3.54%, 500=14.15%, 750=46.30% 00:17:40.823 lat (msec) : 1000=3.38%, 2000=20.90%, >=2000=10.21% 00:17:40.823 cpu : usr=0.03%, sys=1.93%, ctx=1626, majf=0, minf=32769 00:17:40.823 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:17:40.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.823 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.823 issued rwts: total=1244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.823 job1: (groupid=0, jobs=1): err= 0: pid=3561716: Wed Nov 20 18:02:58 2024 00:17:40.823 read: IOPS=56, BW=56.0MiB/s (58.8MB/s)(676MiB/12063msec) 00:17:40.823 slat (usec): min=26, max=1721.2k, avg=14851.38, stdev=105401.51 00:17:40.823 clat (msec): min=412, max=8677, 
avg=2147.79, stdev=2381.21 00:17:40.823 lat (msec): min=414, max=8678, avg=2162.64, stdev=2392.22 00:17:40.824 clat percentiles (msec): 00:17:40.824 | 1.00th=[ 418], 5.00th=[ 418], 10.00th=[ 422], 20.00th=[ 456], 00:17:40.824 | 30.00th=[ 506], 40.00th=[ 676], 50.00th=[ 1133], 60.00th=[ 1234], 00:17:40.824 | 70.00th=[ 2635], 80.00th=[ 4111], 90.00th=[ 6208], 95.00th=[ 8221], 00:17:40.824 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:17:40.824 | 99.99th=[ 8658] 00:17:40.824 bw ( KiB/s): min= 2000, max=282624, per=2.78%, avg=93676.08, stdev=103555.86, samples=12 00:17:40.824 iops : min= 1, max= 276, avg=91.33, stdev=101.27, samples=12 00:17:40.824 lat (msec) : 500=28.11%, 750=12.57%, 1000=5.47%, 2000=22.04%, >=2000=31.80% 00:17:40.824 cpu : usr=0.04%, sys=1.19%, ctx=1116, majf=0, minf=32769 00:17:40.824 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:17:40.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.824 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.824 issued rwts: total=676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.824 job1: (groupid=0, jobs=1): err= 0: pid=3561717: Wed Nov 20 18:02:58 2024 00:17:40.824 read: IOPS=22, BW=22.9MiB/s (24.0MB/s)(322MiB/14068msec) 00:17:40.824 slat (usec): min=31, max=2130.0k, avg=37045.91, stdev=238844.24 00:17:40.824 clat (msec): min=690, max=8860, avg=5389.91, stdev=2788.36 00:17:40.824 lat (msec): min=695, max=8860, avg=5426.96, stdev=2792.40 00:17:40.824 clat percentiles (msec): 00:17:40.824 | 1.00th=[ 693], 5.00th=[ 701], 10.00th=[ 709], 20.00th=[ 3071], 00:17:40.824 | 30.00th=[ 4279], 40.00th=[ 5067], 50.00th=[ 5269], 60.00th=[ 5403], 00:17:40.824 | 70.00th=[ 7886], 80.00th=[ 8423], 90.00th=[ 8557], 95.00th=[ 8792], 00:17:40.824 | 99.00th=[ 8792], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:17:40.824 | 99.99th=[ 8926] 00:17:40.824 bw ( KiB/s): min= 2048, max=118784, per=1.44%, avg=48452.12, stdev=44051.98, samples=8 00:17:40.824 iops : min= 2, max= 116, avg=47.25, stdev=43.00, samples=8 00:17:40.824 lat (msec) : 750=17.70%, 1000=1.55%, >=2000=80.75% 00:17:40.824 cpu : usr=0.01%, sys=0.78%, ctx=426, majf=0, minf=32769 00:17:40.824 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.4% 00:17:40.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.824 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:40.824 issued rwts: total=322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.824 job2: (groupid=0, jobs=1): err= 0: pid=3561726: Wed Nov 20 18:02:58 2024 00:17:40.824 read: IOPS=102, BW=102MiB/s (107MB/s)(1024MiB/10024msec) 00:17:40.824 slat (usec): min=26, max=1148.3k, avg=9762.27, stdev=38323.27 00:17:40.824 clat (msec): min=21, max=3282, avg=987.03, stdev=518.08 00:17:40.824 lat (msec): min=23, max=3285, avg=996.79, stdev=524.77 00:17:40.824 clat percentiles (msec): 00:17:40.824 | 1.00th=[ 40], 5.00th=[ 124], 10.00th=[ 342], 20.00th=[ 592], 00:17:40.824 | 30.00th=[ 835], 40.00th=[ 936], 50.00th=[ 986], 60.00th=[ 1045], 00:17:40.824 | 70.00th=[ 1083], 80.00th=[ 1183], 90.00th=[ 1703], 95.00th=[ 2022], 00:17:40.824 | 99.00th=[ 3171], 99.50th=[ 3205], 99.90th=[ 3272], 99.95th=[ 3272], 00:17:40.824 | 99.99th=[ 3272] 00:17:40.824 bw ( KiB/s): min=45056, max=210944, per=3.55%, avg=119683.92, 
stdev=41499.14, samples=13 00:17:40.824 iops : min= 44, max= 206, avg=116.77, stdev=40.45, samples=13 00:17:40.824 lat (msec) : 50=2.05%, 100=1.86%, 250=4.49%, 500=5.96%, 750=11.82% 00:17:40.824 lat (msec) : 1000=26.07%, 2000=41.60%, >=2000=6.15% 00:17:40.824 cpu : usr=0.04%, sys=1.78%, ctx=1963, majf=0, minf=32769 00:17:40.824 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:17:40.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.824 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.824 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.824 job2: (groupid=0, jobs=1): err= 0: pid=3561727: Wed Nov 20 18:02:58 2024 00:17:40.824 read: IOPS=17, BW=17.5MiB/s (18.3MB/s)(246MiB/14078msec) 00:17:40.824 slat (usec): min=414, max=2127.4k, avg=48522.25, stdev=251897.09 00:17:40.824 clat (msec): min=1619, max=11359, avg=6255.77, stdev=4221.86 00:17:40.824 lat (msec): min=1640, max=11361, avg=6304.29, stdev=4215.36 00:17:40.824 clat percentiles (msec): 00:17:40.824 | 1.00th=[ 1636], 5.00th=[ 1653], 10.00th=[ 1687], 20.00th=[ 1770], 00:17:40.824 | 30.00th=[ 1838], 40.00th=[ 1871], 50.00th=[ 7148], 60.00th=[ 9866], 00:17:40.824 | 70.00th=[10268], 80.00th=[10671], 90.00th=[11073], 95.00th=[11208], 00:17:40.824 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:17:40.824 | 99.99th=[11342] 00:17:40.824 bw ( KiB/s): min= 2043, max=73728, per=0.90%, avg=30433.50, stdev=30841.59, samples=8 00:17:40.824 iops : min= 1, max= 72, avg=29.50, stdev=30.33, samples=8 00:17:40.824 lat (msec) : 2000=42.28%, >=2000=57.72% 00:17:40.824 cpu : usr=0.00%, sys=0.66%, ctx=664, majf=0, minf=32769 00:17:40.824 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.5%, 32=13.0%, >=64=74.4% 00:17:40.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.824 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:17:40.824 issued rwts: total=246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.824 job2: (groupid=0, jobs=1): err= 0: pid=3561728: Wed Nov 20 18:02:58 2024 00:17:40.824 read: IOPS=60, BW=60.2MiB/s (63.1MB/s)(606MiB/10070msec) 00:17:40.824 slat (usec): min=27, max=2047.7k, avg=16511.10, stdev=104008.54 00:17:40.824 clat (msec): min=59, max=4288, avg=1732.04, stdev=1033.33 00:17:40.824 lat (msec): min=72, max=4290, avg=1748.55, stdev=1036.30 00:17:40.824 clat percentiles (msec): 00:17:40.824 | 1.00th=[ 83], 5.00th=[ 477], 10.00th=[ 919], 20.00th=[ 1011], 00:17:40.824 | 30.00th=[ 1062], 40.00th=[ 1200], 50.00th=[ 1401], 60.00th=[ 1737], 00:17:40.824 | 70.00th=[ 2140], 80.00th=[ 2265], 90.00th=[ 4077], 95.00th=[ 4178], 00:17:40.824 | 99.00th=[ 4245], 99.50th=[ 4245], 99.90th=[ 4279], 99.95th=[ 4279], 00:17:40.824 | 99.99th=[ 4279] 00:17:40.824 bw ( KiB/s): min=24576, max=165556, per=2.64%, avg=89104.36, stdev=41608.63, samples=11 00:17:40.824 iops : min= 24, max= 161, avg=86.73, stdev=40.55, samples=11 00:17:40.824 lat (msec) : 100=1.82%, 250=1.98%, 500=1.65%, 750=3.96%, 1000=8.25% 00:17:40.824 lat (msec) : 2000=47.69%, >=2000=34.65% 00:17:40.824 cpu : usr=0.01%, sys=2.11%, ctx=926, majf=0, minf=32769 00:17:40.824 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:17:40.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.824 complete : 0=0.0%, 4=99.8%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.824 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.824 job2: (groupid=0, jobs=1): err= 0: pid=3561729: Wed Nov 20 18:02:58 2024 00:17:40.824 read: IOPS=6, BW=6571KiB/s (6728kB/s)(78.0MiB/12156msec) 00:17:40.824 slat (usec): min=1005, max=2110.5k, avg=128375.67, stdev=477645.25 00:17:40.824 clat (msec): min=2142, max=12152, avg=10383.40, stdev=2931.24 00:17:40.824 lat (msec): min=2157, max=12155, avg=10511.78, stdev=2781.04 00:17:40.824 clat percentiles (msec): 00:17:40.824 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 8557], 00:17:40.824 | 30.00th=[10671], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:17:40.824 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:17:40.824 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:17:40.824 | 99.99th=[12147] 00:17:40.824 lat (msec) : >=2000=100.00% 00:17:40.824 cpu : usr=0.01%, sys=0.96%, ctx=108, majf=0, minf=19969 00:17:40.824 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2% 00:17:40.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.824 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:40.824 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.824 job2: (groupid=0, jobs=1): err= 0: pid=3561730: Wed Nov 20 18:02:58 2024 00:17:40.824 read: IOPS=1, BW=1806KiB/s (1849kB/s)(25.0MiB/14178msec) 00:17:40.824 slat (msec): min=4, max=4250, avg=481.55, stdev=1060.02 00:17:40.824 clat (msec): min=2138, max=14165, avg=10648.26, stdev=4340.40 00:17:40.824 lat (msec): min=4199, max=14177, avg=11129.80, stdev=4012.36 00:17:40.824 clat percentiles (msec): 00:17:40.824 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4279], 00:17:40.824 | 30.00th=[ 6409], 40.00th=[12818], 50.00th=[12818], 60.00th=[14026], 00:17:40.824 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:17:40.825 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:17:40.825 | 99.99th=[14160] 00:17:40.825 lat (msec) : >=2000=100.00% 00:17:40.825 cpu : usr=0.00%, sys=0.27%, ctx=82, majf=0, minf=6401 00:17:40.825 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:17:40.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.825 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:40.825 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.825 job2: (groupid=0, jobs=1): err= 0: pid=3561731: Wed Nov 20 18:02:58 2024 00:17:40.825 read: IOPS=172, BW=172MiB/s (180MB/s)(1741MiB/10117msec) 00:17:40.825 slat (usec): min=33, max=1011.0k, avg=5737.84, stdev=25862.20 00:17:40.825 clat (msec): min=114, max=2042, avg=711.16, stdev=471.04 00:17:40.825 lat (msec): min=130, max=2093, avg=716.90, stdev=473.72 00:17:40.825 clat percentiles (msec): 00:17:40.825 | 1.00th=[ 243], 5.00th=[ 259], 10.00th=[ 262], 20.00th=[ 264], 00:17:40.825 | 30.00th=[ 275], 40.00th=[ 477], 50.00th=[ 718], 60.00th=[ 760], 00:17:40.825 | 70.00th=[ 860], 80.00th=[ 1028], 90.00th=[ 1167], 95.00th=[ 1972], 00:17:40.825 | 99.00th=[ 2039], 99.50th=[ 2039], 99.90th=[ 2039], 99.95th=[ 2039], 00:17:40.825 | 99.99th=[ 2039] 00:17:40.825 bw ( 
KiB/s): min=26570, max=497664, per=5.45%, avg=183619.56, stdev=132642.33, samples=18 00:17:40.825 iops : min= 25, max= 486, avg=179.22, stdev=129.62, samples=18 00:17:40.825 lat (msec) : 250=1.38%, 500=40.26%, 750=13.67%, 1000=23.26%, 2000=17.86% 00:17:40.825 lat (msec) : >=2000=3.56% 00:17:40.825 cpu : usr=0.19%, sys=3.22%, ctx=1696, majf=0, minf=32769 00:17:40.825 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:17:40.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.825 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.825 issued rwts: total=1741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.825 job2: (groupid=0, jobs=1): err= 0: pid=3561732: Wed Nov 20 18:02:58 2024 00:17:40.825 read: IOPS=1, BW=1959KiB/s (2006kB/s)(27.0MiB/14113msec) 00:17:40.825 slat (usec): min=1950, max=2121.9k, avg=443504.49, stdev=833862.33 00:17:40.825 clat (msec): min=2138, max=14098, avg=10334.08, stdev=3625.13 00:17:40.825 lat (msec): min=4213, max=14112, avg=10777.59, stdev=3301.94 00:17:40.825 clat percentiles (msec): 00:17:40.825 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:17:40.825 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12818], 00:17:40.825 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14160], 00:17:40.825 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:17:40.825 | 99.99th=[14160] 00:17:40.825 lat (msec) : >=2000=100.00% 00:17:40.825 cpu : usr=0.00%, sys=0.23%, ctx=73, majf=0, minf=6913 00:17:40.825 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:17:40.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.825 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:40.825 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.825 job2: (groupid=0, jobs=1): err= 0: pid=3561733: Wed Nov 20 18:02:58 2024 00:17:40.825 read: IOPS=3, BW=3452KiB/s (3535kB/s)(41.0MiB/12162msec) 00:17:40.825 slat (msec): min=2, max=2144, avg=244.54, stdev=646.96 00:17:40.825 clat (msec): min=2135, max=12156, avg=9964.08, stdev=3180.15 00:17:40.825 lat (msec): min=2166, max=12161, avg=10208.63, stdev=2939.48 00:17:40.825 clat percentiles (msec): 00:17:40.825 | 1.00th=[ 2140], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6477], 00:17:40.825 | 30.00th=[ 8557], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:17:40.825 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:17:40.825 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:17:40.825 | 99.99th=[12147] 00:17:40.825 lat (msec) : >=2000=100.00% 00:17:40.825 cpu : usr=0.01%, sys=0.60%, ctx=91, majf=0, minf=10497 00:17:40.825 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:17:40.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.826 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:40.826 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.826 job2: (groupid=0, jobs=1): err= 0: pid=3561735: Wed Nov 20 18:02:58 2024 00:17:40.826 read: IOPS=71, BW=71.2MiB/s (74.7MB/s)(713MiB/10015msec) 00:17:40.826 slat (usec): min=23, max=2048.7k, 
avg=14021.04, stdev=90246.00 00:17:40.826 clat (msec): min=14, max=5307, avg=1000.21, stdev=464.12 00:17:40.826 lat (msec): min=14, max=5345, avg=1014.23, stdev=492.40 00:17:40.826 clat percentiles (msec): 00:17:40.826 | 1.00th=[ 34], 5.00th=[ 122], 10.00th=[ 351], 20.00th=[ 718], 00:17:40.826 | 30.00th=[ 869], 40.00th=[ 961], 50.00th=[ 1011], 60.00th=[ 1062], 00:17:40.826 | 70.00th=[ 1200], 80.00th=[ 1351], 90.00th=[ 1418], 95.00th=[ 1636], 00:17:40.826 | 99.00th=[ 1905], 99.50th=[ 2072], 99.90th=[ 5336], 99.95th=[ 5336], 00:17:40.826 | 99.99th=[ 5336] 00:17:40.826 bw ( KiB/s): min=32768, max=202752, per=3.35%, avg=112867.56, stdev=51916.34, samples=9 00:17:40.826 iops : min= 32, max= 198, avg=110.22, stdev=50.70, samples=9 00:17:40.826 lat (msec) : 20=0.70%, 50=1.54%, 100=2.10%, 250=3.93%, 500=4.35% 00:17:40.826 lat (msec) : 750=9.26%, 1000=24.40%, 2000=53.16%, >=2000=0.56% 00:17:40.826 cpu : usr=0.06%, sys=0.85%, ctx=1349, majf=0, minf=32769 00:17:40.826 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:17:40.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.826 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.826 issued rwts: total=713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.826 job2: (groupid=0, jobs=1): err= 0: pid=3561736: Wed Nov 20 18:02:58 2024 00:17:40.826 read: IOPS=56, BW=56.8MiB/s (59.6MB/s)(805MiB/14167msec) 00:17:40.826 slat (usec): min=30, max=2072.2k, avg=14934.16, stdev=88364.15 00:17:40.826 clat (msec): min=647, max=6193, avg=1901.36, stdev=1649.44 00:17:40.826 lat (msec): min=674, max=6196, avg=1916.29, stdev=1654.18 00:17:40.826 clat percentiles (msec): 00:17:40.826 | 1.00th=[ 676], 5.00th=[ 684], 10.00th=[ 726], 20.00th=[ 953], 00:17:40.826 | 30.00th=[ 1045], 40.00th=[ 1099], 50.00th=[ 1150], 60.00th=[ 1200], 00:17:40.826 | 70.00th=[ 1670], 80.00th=[ 2089], 90.00th=[ 5403], 95.00th=[ 5805], 00:17:40.826 | 99.00th=[ 6074], 99.50th=[ 6141], 99.90th=[ 6208], 99.95th=[ 6208], 00:17:40.826 | 99.99th=[ 6208] 00:17:40.826 bw ( KiB/s): min= 2048, max=204800, per=2.75%, avg=92545.67, stdev=57080.14, samples=15 00:17:40.826 iops : min= 2, max= 200, avg=90.33, stdev=55.81, samples=15 00:17:40.826 lat (msec) : 750=12.17%, 1000=9.44%, 2000=53.54%, >=2000=24.84% 00:17:40.826 cpu : usr=0.02%, sys=1.20%, ctx=1784, majf=0, minf=32769 00:17:40.826 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:17:40.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.826 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.826 issued rwts: total=805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.826 job2: (groupid=0, jobs=1): err= 0: pid=3561737: Wed Nov 20 18:02:58 2024 00:17:40.826 read: IOPS=143, BW=143MiB/s (150MB/s)(1441MiB/10073msec) 00:17:40.826 slat (usec): min=32, max=814676, avg=6942.30, stdev=23339.54 00:17:40.826 clat (msec): min=58, max=2838, avg=866.61, stdev=619.81 00:17:40.826 lat (msec): min=73, max=2841, avg=873.55, stdev=623.77 00:17:40.826 clat percentiles (msec): 00:17:40.826 | 1.00th=[ 130], 5.00th=[ 338], 10.00th=[ 422], 20.00th=[ 477], 00:17:40.826 | 30.00th=[ 542], 40.00th=[ 584], 50.00th=[ 642], 60.00th=[ 684], 00:17:40.826 | 70.00th=[ 768], 80.00th=[ 1099], 90.00th=[ 1737], 95.00th=[ 2433], 00:17:40.826 | 99.00th=[ 2769], 99.50th=[ 2769], 
99.90th=[ 2836], 99.95th=[ 2836], 00:17:40.826 | 99.99th=[ 2836] 00:17:40.826 bw ( KiB/s): min=10240, max=311296, per=4.20%, avg=141482.58, stdev=87611.66, samples=19 00:17:40.826 iops : min= 10, max= 304, avg=138.11, stdev=85.51, samples=19 00:17:40.826 lat (msec) : 100=0.49%, 250=3.12%, 500=18.95%, 750=44.97%, 1000=8.26% 00:17:40.826 lat (msec) : 2000=15.41%, >=2000=8.81% 00:17:40.826 cpu : usr=0.09%, sys=3.31%, ctx=1885, majf=0, minf=32770 00:17:40.826 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:17:40.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.826 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.826 issued rwts: total=1441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.826 job2: (groupid=0, jobs=1): err= 0: pid=3561738: Wed Nov 20 18:02:58 2024 00:17:40.826 read: IOPS=71, BW=71.7MiB/s (75.2MB/s)(726MiB/10124msec) 00:17:40.826 slat (usec): min=37, max=2043.6k, avg=13793.01, stdev=85607.19 00:17:40.826 clat (msec): min=106, max=3105, avg=1702.05, stdev=829.06 00:17:40.826 lat (msec): min=153, max=3113, avg=1715.84, stdev=830.13 00:17:40.826 clat percentiles (msec): 00:17:40.826 | 1.00th=[ 355], 5.00th=[ 827], 10.00th=[ 911], 20.00th=[ 995], 00:17:40.826 | 30.00th=[ 1062], 40.00th=[ 1200], 50.00th=[ 1334], 60.00th=[ 1485], 00:17:40.826 | 70.00th=[ 2534], 80.00th=[ 2702], 90.00th=[ 2970], 95.00th=[ 3037], 00:17:40.826 | 99.00th=[ 3071], 99.50th=[ 3071], 99.90th=[ 3104], 99.95th=[ 3104], 00:17:40.826 | 99.99th=[ 3104] 00:17:40.826 bw ( KiB/s): min=18432, max=167936, per=2.42%, avg=81641.73, stdev=41524.16, samples=15 00:17:40.826 iops : min= 18, max= 164, avg=79.67, stdev=40.62, samples=15 00:17:40.826 lat (msec) : 250=0.55%, 500=0.96%, 750=1.93%, 1000=16.80%, 2000=44.77% 00:17:40.826 lat (msec) : >=2000=34.99% 00:17:40.826 cpu : usr=0.02%, sys=2.03%, ctx=1300, majf=0, minf=32769 00:17:40.826 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:17:40.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.826 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.826 issued rwts: total=726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.827 job2: (groupid=0, jobs=1): err= 0: pid=3561739: Wed Nov 20 18:02:58 2024 00:17:40.827 read: IOPS=1, BW=1825KiB/s (1869kB/s)(25.0MiB/14029msec) 00:17:40.827 slat (msec): min=3, max=2129, avg=475.51, stdev=861.02 00:17:40.827 clat (msec): min=2140, max=14024, avg=9464.06, stdev=4053.85 00:17:40.827 lat (msec): min=4200, max=14027, avg=9939.57, stdev=3851.11 00:17:40.827 clat percentiles (msec): 00:17:40.827 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4245], 00:17:40.827 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[10671], 60.00th=[10671], 00:17:40.827 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:17:40.827 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:40.827 | 99.99th=[14026] 00:17:40.827 lat (msec) : >=2000=100.00% 00:17:40.827 cpu : usr=0.00%, sys=0.14%, ctx=66, majf=0, minf=6401 00:17:40.827 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:17:40.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.827 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:40.827 issued rwts: 
total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.827 job3: (groupid=0, jobs=1): err= 0: pid=3561743: Wed Nov 20 18:02:58 2024 00:17:40.827 read: IOPS=148, BW=149MiB/s (156MB/s)(1497MiB/10056msec) 00:17:40.827 slat (usec): min=28, max=85377, avg=6676.98, stdev=9109.54 00:17:40.827 clat (msec): min=50, max=1547, avg=801.13, stdev=373.76 00:17:40.827 lat (msec): min=57, max=1551, avg=807.81, stdev=376.38 00:17:40.827 clat percentiles (msec): 00:17:40.827 | 1.00th=[ 110], 5.00th=[ 405], 10.00th=[ 430], 20.00th=[ 439], 00:17:40.827 | 30.00th=[ 510], 40.00th=[ 642], 50.00th=[ 651], 60.00th=[ 860], 00:17:40.827 | 70.00th=[ 1045], 80.00th=[ 1250], 90.00th=[ 1385], 95.00th=[ 1401], 00:17:40.827 | 99.00th=[ 1519], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552], 00:17:40.827 | 99.99th=[ 1552] 00:17:40.827 bw ( KiB/s): min=47104, max=307200, per=4.61%, avg=155369.17, stdev=74885.59, samples=18 00:17:40.827 iops : min= 46, max= 300, avg=151.56, stdev=73.18, samples=18 00:17:40.827 lat (msec) : 100=0.53%, 250=1.40%, 500=26.92%, 750=29.19%, 1000=10.69% 00:17:40.827 lat (msec) : 2000=31.26% 00:17:40.827 cpu : usr=0.10%, sys=2.78%, ctx=2131, majf=0, minf=32769 00:17:40.827 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:17:40.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.827 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.827 issued rwts: total=1497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.827 job3: (groupid=0, jobs=1): err= 0: pid=3561745: Wed Nov 20 18:02:58 2024 00:17:40.827 read: IOPS=66, BW=66.9MiB/s (70.1MB/s)(673MiB/10063msec) 00:17:40.827 slat (usec): min=28, max=2107.2k, avg=14882.22, stdev=106120.01 00:17:40.827 clat (msec): min=43, max=6013, avg=1711.32, stdev=1797.58 00:17:40.827 lat (msec): min=97, max=6014, avg=1726.20, stdev=1803.52 00:17:40.827 clat percentiles (msec): 00:17:40.827 | 1.00th=[ 275], 5.00th=[ 592], 10.00th=[ 592], 20.00th=[ 600], 00:17:40.827 | 30.00th=[ 634], 40.00th=[ 659], 50.00th=[ 718], 60.00th=[ 802], 00:17:40.827 | 70.00th=[ 1821], 80.00th=[ 2198], 90.00th=[ 5269], 95.00th=[ 5805], 00:17:40.827 | 99.00th=[ 5940], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:17:40.827 | 99.99th=[ 6007] 00:17:40.827 bw ( KiB/s): min= 8192, max=219136, per=2.76%, avg=92921.92, stdev=78897.92, samples=12 00:17:40.827 iops : min= 8, max= 214, avg=90.67, stdev=77.03, samples=12 00:17:40.827 lat (msec) : 50=0.15%, 100=0.15%, 250=0.59%, 500=1.34%, 750=50.67% 00:17:40.827 lat (msec) : 1000=12.33%, 2000=13.22%, >=2000=21.55% 00:17:40.827 cpu : usr=0.05%, sys=2.09%, ctx=1088, majf=0, minf=32769 00:17:40.827 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:17:40.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.827 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.827 issued rwts: total=673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.827 job3: (groupid=0, jobs=1): err= 0: pid=3561746: Wed Nov 20 18:02:58 2024 00:17:40.827 read: IOPS=50, BW=50.9MiB/s (53.3MB/s)(510MiB/10025msec) 00:17:40.827 slat (usec): min=28, max=2061.1k, avg=19605.52, stdev=91788.61 00:17:40.827 clat (msec): min=23, max=4540, avg=1992.06, stdev=1168.41 00:17:40.827 lat (msec): min=25, max=4573, 
avg=2011.67, stdev=1173.52 00:17:40.827 clat percentiles (msec): 00:17:40.827 | 1.00th=[ 39], 5.00th=[ 77], 10.00th=[ 443], 20.00th=[ 1183], 00:17:40.827 | 30.00th=[ 1720], 40.00th=[ 1804], 50.00th=[ 1854], 60.00th=[ 1905], 00:17:40.827 | 70.00th=[ 2165], 80.00th=[ 2299], 90.00th=[ 4279], 95.00th=[ 4396], 00:17:40.827 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:17:40.827 | 99.99th=[ 4530] 00:17:40.827 bw ( KiB/s): min= 2048, max=118929, per=1.93%, avg=65206.75, stdev=30031.96, samples=12 00:17:40.827 iops : min= 2, max= 116, avg=63.67, stdev=29.31, samples=12 00:17:40.827 lat (msec) : 50=2.75%, 100=3.14%, 250=1.57%, 500=3.73%, 750=4.12% 00:17:40.827 lat (msec) : 1000=2.75%, 2000=48.04%, >=2000=33.92% 00:17:40.827 cpu : usr=0.03%, sys=1.22%, ctx=2011, majf=0, minf=32769 00:17:40.827 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:17:40.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.827 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:40.827 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.827 job3: (groupid=0, jobs=1): err= 0: pid=3561747: Wed Nov 20 18:02:58 2024 00:17:40.827 read: IOPS=56, BW=56.9MiB/s (59.7MB/s)(573MiB/10065msec) 00:17:40.827 slat (usec): min=26, max=2037.6k, avg=17454.20, stdev=98065.22 00:17:40.827 clat (msec): min=60, max=4591, avg=1678.29, stdev=1139.16 00:17:40.827 lat (msec): min=67, max=4594, avg=1695.75, stdev=1145.24 00:17:40.827 clat percentiles (msec): 00:17:40.827 | 1.00th=[ 113], 5.00th=[ 351], 10.00th=[ 735], 20.00th=[ 785], 00:17:40.827 | 30.00th=[ 835], 40.00th=[ 1011], 50.00th=[ 1217], 60.00th=[ 1871], 00:17:40.827 | 70.00th=[ 2198], 80.00th=[ 2400], 90.00th=[ 2702], 95.00th=[ 4530], 00:17:40.827 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:17:40.827 | 99.99th=[ 4597] 00:17:40.827 bw ( KiB/s): min=34816, max=167936, per=2.46%, avg=82832.82, stdev=49210.95, samples=11 00:17:40.827 iops : min= 34, max= 164, avg=80.82, stdev=47.95, samples=11 00:17:40.827 lat (msec) : 100=0.87%, 250=2.97%, 500=3.49%, 750=4.54%, 1000=28.10% 00:17:40.827 lat (msec) : 2000=25.13%, >=2000=34.90% 00:17:40.827 cpu : usr=0.00%, sys=1.79%, ctx=1606, majf=0, minf=32769 00:17:40.827 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:17:40.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.827 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.827 issued rwts: total=573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.827 job3: (groupid=0, jobs=1): err= 0: pid=3561748: Wed Nov 20 18:02:58 2024 00:17:40.827 read: IOPS=12, BW=12.8MiB/s (13.4MB/s)(153MiB/11966msec) 00:17:40.827 slat (usec): min=339, max=2105.5k, avg=77786.49, stdev=355397.29 00:17:40.827 clat (msec): min=63, max=11547, avg=9145.81, stdev=3010.81 00:17:40.827 lat (msec): min=1632, max=11574, avg=9223.59, stdev=2922.38 00:17:40.827 clat percentiles (msec): 00:17:40.827 | 1.00th=[ 1620], 5.00th=[ 2123], 10.00th=[ 3272], 20.00th=[ 7416], 00:17:40.828 | 30.00th=[ 9463], 40.00th=[10402], 50.00th=[10537], 60.00th=[10805], 00:17:40.828 | 70.00th=[10805], 80.00th=[10939], 90.00th=[11208], 95.00th=[11342], 00:17:40.828 | 99.00th=[11476], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:17:40.828 | 99.99th=[11610] 00:17:40.828 bw 
( KiB/s): min= 2048, max=12288, per=0.24%, avg=8183.17, stdev=4289.08, samples=6 00:17:40.828 iops : min= 2, max= 12, avg= 7.67, stdev= 4.03, samples=6 00:17:40.828 lat (msec) : 100=0.65%, 2000=3.27%, >=2000=96.08% 00:17:40.828 cpu : usr=0.00%, sys=0.65%, ctx=325, majf=0, minf=32769 00:17:40.828 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.5%, 32=20.9%, >=64=58.8% 00:17:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.828 complete : 0=0.0%, 4=96.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.7% 00:17:40.828 issued rwts: total=153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.828 job3: (groupid=0, jobs=1): err= 0: pid=3561749: Wed Nov 20 18:02:58 2024 00:17:40.828 read: IOPS=5, BW=6070KiB/s (6216kB/s)(72.0MiB/12146msec) 00:17:40.828 slat (usec): min=717, max=2112.2k, avg=139109.90, stdev=496953.16 00:17:40.828 clat (msec): min=2128, max=12142, avg=10244.53, stdev=3050.72 00:17:40.828 lat (msec): min=2145, max=12145, avg=10383.64, stdev=2900.08 00:17:40.828 clat percentiles (msec): 00:17:40.828 | 1.00th=[ 2123], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 8557], 00:17:40.828 | 30.00th=[10671], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:17:40.828 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:17:40.828 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:17:40.828 | 99.99th=[12147] 00:17:40.828 lat (msec) : >=2000=100.00% 00:17:40.828 cpu : usr=0.01%, sys=0.91%, ctx=103, majf=0, minf=18433 00:17:40.828 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:17:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.828 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:40.828 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.828 job3: (groupid=0, jobs=1): err= 0: pid=3561750: Wed Nov 20 18:02:58 2024 00:17:40.828 read: IOPS=83, BW=83.9MiB/s (88.0MB/s)(841MiB/10026msec) 00:17:40.828 slat (usec): min=31, max=88797, avg=11885.95, stdev=14822.11 00:17:40.828 clat (msec): min=24, max=2863, avg=1389.35, stdev=744.03 00:17:40.828 lat (msec): min=26, max=2870, avg=1401.23, stdev=747.91 00:17:40.828 clat percentiles (msec): 00:17:40.828 | 1.00th=[ 114], 5.00th=[ 592], 10.00th=[ 600], 20.00th=[ 625], 00:17:40.828 | 30.00th=[ 667], 40.00th=[ 1020], 50.00th=[ 1284], 60.00th=[ 1586], 00:17:40.828 | 70.00th=[ 1838], 80.00th=[ 2140], 90.00th=[ 2567], 95.00th=[ 2668], 00:17:40.828 | 99.00th=[ 2802], 99.50th=[ 2836], 99.90th=[ 2869], 99.95th=[ 2869], 00:17:40.828 | 99.99th=[ 2869] 00:17:40.828 bw ( KiB/s): min=30781, max=212992, per=2.55%, avg=86075.47, stdev=61107.82, samples=17 00:17:40.828 iops : min= 30, max= 208, avg=84.00, stdev=59.64, samples=17 00:17:40.828 lat (msec) : 50=0.71%, 100=0.12%, 250=0.83%, 500=0.71%, 750=30.92% 00:17:40.828 lat (msec) : 1000=6.42%, 2000=36.86%, >=2000=23.42% 00:17:40.828 cpu : usr=0.08%, sys=1.77%, ctx=2257, majf=0, minf=32769 00:17:40.828 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:17:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.828 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.828 issued rwts: total=841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.828 
job3: (groupid=0, jobs=1): err= 0: pid=3561751: Wed Nov 20 18:02:58 2024 00:17:40.828 read: IOPS=66, BW=66.4MiB/s (69.7MB/s)(665MiB/10010msec) 00:17:40.828 slat (usec): min=26, max=2042.0k, avg=15034.61, stdev=80140.65 00:17:40.828 clat (msec): min=9, max=4727, avg=1525.84, stdev=1234.86 00:17:40.828 lat (msec): min=9, max=4738, avg=1540.88, stdev=1243.50 00:17:40.828 clat percentiles (msec): 00:17:40.828 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 74], 20.00th=[ 634], 00:17:40.828 | 30.00th=[ 1062], 40.00th=[ 1150], 50.00th=[ 1334], 60.00th=[ 1469], 00:17:40.828 | 70.00th=[ 1552], 80.00th=[ 1888], 90.00th=[ 4530], 95.00th=[ 4597], 00:17:40.828 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 00:17:40.828 | 99.99th=[ 4732] 00:17:40.828 bw ( KiB/s): min=45056, max=133120, per=2.59%, avg=87231.30, stdev=29232.72, samples=10 00:17:40.828 iops : min= 44, max= 130, avg=85.10, stdev=28.61, samples=10 00:17:40.828 lat (msec) : 10=0.30%, 20=2.26%, 50=5.11%, 100=3.31%, 250=2.56% 00:17:40.828 lat (msec) : 500=3.31%, 750=5.26%, 1000=4.51%, 2000=57.29%, >=2000=16.09% 00:17:40.828 cpu : usr=0.03%, sys=1.26%, ctx=1798, majf=0, minf=32769 00:17:40.828 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:17:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.828 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.828 issued rwts: total=665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.828 job3: (groupid=0, jobs=1): err= 0: pid=3561752: Wed Nov 20 18:02:58 2024 00:17:40.828 read: IOPS=63, BW=63.9MiB/s (67.0MB/s)(644MiB/10073msec) 00:17:40.828 slat (usec): min=34, max=2034.7k, avg=15574.88, stdev=82619.91 00:17:40.828 clat (msec): min=38, max=4182, avg=1908.77, stdev=880.87 00:17:40.828 lat (msec): min=77, max=4190, avg=1924.34, stdev=884.45 00:17:40.828 clat percentiles (msec): 00:17:40.828 | 1.00th=[ 174], 5.00th=[ 684], 10.00th=[ 693], 20.00th=[ 751], 00:17:40.828 | 30.00th=[ 1485], 40.00th=[ 1938], 50.00th=[ 2165], 60.00th=[ 2265], 00:17:40.828 | 70.00th=[ 2400], 80.00th=[ 2534], 90.00th=[ 2970], 95.00th=[ 3071], 00:17:40.828 | 99.00th=[ 4111], 99.50th=[ 4178], 99.90th=[ 4178], 99.95th=[ 4178], 00:17:40.828 | 99.99th=[ 4178] 00:17:40.828 bw ( KiB/s): min= 4096, max=153600, per=1.96%, avg=66030.06, stdev=42816.32, samples=16 00:17:40.828 iops : min= 4, max= 150, avg=64.38, stdev=41.74, samples=16 00:17:40.828 lat (msec) : 50=0.16%, 100=0.31%, 250=0.78%, 500=2.02%, 750=16.93% 00:17:40.828 lat (msec) : 1000=6.52%, 2000=15.84%, >=2000=57.45% 00:17:40.828 cpu : usr=0.02%, sys=1.88%, ctx=1449, majf=0, minf=32769 00:17:40.828 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:17:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.828 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.828 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.828 job3: (groupid=0, jobs=1): err= 0: pid=3561753: Wed Nov 20 18:02:58 2024 00:17:40.828 read: IOPS=71, BW=71.4MiB/s (74.9MB/s)(716MiB/10024msec) 00:17:40.828 slat (usec): min=26, max=2038.2k, avg=13966.10, stdev=77222.94 00:17:40.828 clat (msec): min=20, max=3628, avg=1605.88, stdev=991.55 00:17:40.828 lat (msec): min=25, max=3633, avg=1619.85, stdev=994.38 00:17:40.828 clat percentiles (msec): 00:17:40.828 | 1.00th=[ 
41], 5.00th=[ 150], 10.00th=[ 439], 20.00th=[ 785], 00:17:40.828 | 30.00th=[ 844], 40.00th=[ 1150], 50.00th=[ 1620], 60.00th=[ 1754], 00:17:40.828 | 70.00th=[ 1921], 80.00th=[ 2106], 90.00th=[ 3507], 95.00th=[ 3574], 00:17:40.828 | 99.00th=[ 3608], 99.50th=[ 3608], 99.90th=[ 3641], 99.95th=[ 3641], 00:17:40.828 | 99.99th=[ 3641] 00:17:40.828 bw ( KiB/s): min=34816, max=169984, per=2.61%, avg=88025.08, stdev=47063.61, samples=12 00:17:40.828 iops : min= 34, max= 166, avg=85.83, stdev=45.89, samples=12 00:17:40.828 lat (msec) : 50=1.68%, 100=1.68%, 250=2.93%, 500=3.91%, 750=5.59% 00:17:40.828 lat (msec) : 1000=19.13%, 2000=41.34%, >=2000=23.74% 00:17:40.828 cpu : usr=0.01%, sys=1.16%, ctx=1677, majf=0, minf=32769 00:17:40.828 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:17:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.828 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.828 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.828 job3: (groupid=0, jobs=1): err= 0: pid=3561755: Wed Nov 20 18:02:58 2024 00:17:40.828 read: IOPS=58, BW=58.9MiB/s (61.8MB/s)(590MiB/10010msec) 00:17:40.829 slat (usec): min=25, max=2040.5k, avg=16941.84, stdev=100723.66 00:17:40.829 clat (msec): min=9, max=4321, avg=1346.43, stdev=812.86 00:17:40.829 lat (msec): min=10, max=4405, avg=1363.37, stdev=825.58 00:17:40.829 clat percentiles (msec): 00:17:40.829 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 61], 20.00th=[ 751], 00:17:40.829 | 30.00th=[ 995], 40.00th=[ 1045], 50.00th=[ 1217], 60.00th=[ 1804], 00:17:40.829 | 70.00th=[ 1955], 80.00th=[ 2165], 90.00th=[ 2299], 95.00th=[ 2366], 00:17:40.829 | 99.00th=[ 3171], 99.50th=[ 3205], 99.90th=[ 4329], 99.95th=[ 4329], 00:17:40.829 | 99.99th=[ 4329] 00:17:40.829 bw ( KiB/s): min=22528, max=141312, per=2.13%, avg=71680.00, stdev=36724.67, samples=10 00:17:40.829 iops : min= 22, max= 138, avg=70.00, stdev=35.86, samples=10 00:17:40.829 lat (msec) : 10=0.17%, 20=2.71%, 50=6.10%, 100=5.42%, 250=4.41% 00:17:40.829 lat (msec) : 500=0.34%, 750=0.68%, 1000=12.03%, 2000=39.83%, >=2000=28.31% 00:17:40.829 cpu : usr=0.05%, sys=1.46%, ctx=1133, majf=0, minf=32769 00:17:40.829 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:17:40.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.829 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.829 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.829 job3: (groupid=0, jobs=1): err= 0: pid=3561756: Wed Nov 20 18:02:58 2024 00:17:40.829 read: IOPS=52, BW=52.9MiB/s (55.5MB/s)(531MiB/10035msec) 00:17:40.829 slat (usec): min=38, max=2044.9k, avg=18845.69, stdev=89859.90 00:17:40.829 clat (msec): min=25, max=4421, avg=2084.13, stdev=1136.69 00:17:40.829 lat (msec): min=47, max=4443, avg=2102.98, stdev=1137.95 00:17:40.829 clat percentiles (msec): 00:17:40.829 | 1.00th=[ 61], 5.00th=[ 174], 10.00th=[ 835], 20.00th=[ 1318], 00:17:40.829 | 30.00th=[ 1636], 40.00th=[ 1754], 50.00th=[ 1838], 60.00th=[ 2005], 00:17:40.829 | 70.00th=[ 2265], 80.00th=[ 2500], 90.00th=[ 4077], 95.00th=[ 4329], 00:17:40.829 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396], 00:17:40.829 | 99.99th=[ 4396] 00:17:40.829 bw ( KiB/s): min=24576, max=122880, per=2.04%, avg=68849.33, stdev=32631.38, 
samples=12 00:17:40.829 iops : min= 24, max= 120, avg=67.08, stdev=31.88, samples=12 00:17:40.829 lat (msec) : 50=0.56%, 100=2.64%, 250=3.01%, 500=2.45%, 750=1.13% 00:17:40.829 lat (msec) : 1000=1.13%, 2000=46.89%, >=2000=42.18% 00:17:40.829 cpu : usr=0.00%, sys=1.64%, ctx=1614, majf=0, minf=32769 00:17:40.829 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1% 00:17:40.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.829 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.829 issued rwts: total=531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.829 job3: (groupid=0, jobs=1): err= 0: pid=3561757: Wed Nov 20 18:02:58 2024 00:17:40.829 read: IOPS=88, BW=88.4MiB/s (92.7MB/s)(893MiB/10106msec) 00:17:40.829 slat (usec): min=22, max=1932.0k, avg=11194.46, stdev=65653.89 00:17:40.829 clat (msec): min=104, max=3736, avg=1359.41, stdev=835.49 00:17:40.829 lat (msec): min=106, max=3742, avg=1370.60, stdev=838.96 00:17:40.829 clat percentiles (msec): 00:17:40.829 | 1.00th=[ 205], 5.00th=[ 447], 10.00th=[ 726], 20.00th=[ 852], 00:17:40.829 | 30.00th=[ 1003], 40.00th=[ 1045], 50.00th=[ 1099], 60.00th=[ 1167], 00:17:40.829 | 70.00th=[ 1267], 80.00th=[ 1536], 90.00th=[ 2903], 95.00th=[ 3473], 00:17:40.829 | 99.00th=[ 3675], 99.50th=[ 3708], 99.90th=[ 3742], 99.95th=[ 3742], 00:17:40.829 | 99.99th=[ 3742] 00:17:40.829 bw ( KiB/s): min=18432, max=174080, per=3.10%, avg=104553.40, stdev=39775.86, samples=15 00:17:40.829 iops : min= 18, max= 170, avg=102.00, stdev=38.82, samples=15 00:17:40.829 lat (msec) : 250=2.69%, 500=3.36%, 750=5.60%, 1000=18.48%, 2000=55.66% 00:17:40.829 lat (msec) : >=2000=14.22% 00:17:40.829 cpu : usr=0.05%, sys=2.00%, ctx=1598, majf=0, minf=32769 00:17:40.829 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:17:40.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.829 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.829 issued rwts: total=893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.829 job4: (groupid=0, jobs=1): err= 0: pid=3561771: Wed Nov 20 18:02:58 2024 00:17:40.829 read: IOPS=100, BW=100MiB/s (105MB/s)(1013MiB/10085msec) 00:17:40.829 slat (usec): min=28, max=94250, avg=9883.83, stdev=10665.61 00:17:40.829 clat (msec): min=65, max=1808, avg=1196.84, stdev=327.50 00:17:40.829 lat (msec): min=102, max=1820, avg=1206.72, stdev=328.71 00:17:40.829 clat percentiles (msec): 00:17:40.829 | 1.00th=[ 136], 5.00th=[ 493], 10.00th=[ 944], 20.00th=[ 1045], 00:17:40.829 | 30.00th=[ 1099], 40.00th=[ 1133], 50.00th=[ 1167], 60.00th=[ 1200], 00:17:40.829 | 70.00th=[ 1318], 80.00th=[ 1452], 90.00th=[ 1620], 95.00th=[ 1737], 00:17:40.829 | 99.00th=[ 1787], 99.50th=[ 1804], 99.90th=[ 1804], 99.95th=[ 1804], 00:17:40.829 | 99.99th=[ 1804] 00:17:40.829 bw ( KiB/s): min=49152, max=136942, per=2.99%, avg=100640.33, stdev=25606.51, samples=18 00:17:40.829 iops : min= 48, max= 133, avg=98.11, stdev=24.88, samples=18 00:17:40.829 lat (msec) : 100=0.10%, 250=2.07%, 500=2.96%, 750=2.47%, 1000=5.92% 00:17:40.829 lat (msec) : 2000=86.48% 00:17:40.829 cpu : usr=0.08%, sys=2.57%, ctx=2166, majf=0, minf=32769 00:17:40.829 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:17:40.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:17:40.829 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.829 issued rwts: total=1013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.829 job4: (groupid=0, jobs=1): err= 0: pid=3561772: Wed Nov 20 18:02:58 2024 00:17:40.829 read: IOPS=100, BW=101MiB/s (106MB/s)(1214MiB/12047msec) 00:17:40.829 slat (usec): min=22, max=2099.4k, avg=9867.82, stdev=70225.22 00:17:40.830 clat (msec): min=63, max=2950, avg=1062.63, stdev=631.45 00:17:40.830 lat (msec): min=426, max=2955, avg=1072.49, stdev=634.34 00:17:40.830 clat percentiles (msec): 00:17:40.830 | 1.00th=[ 426], 5.00th=[ 430], 10.00th=[ 468], 20.00th=[ 531], 00:17:40.830 | 30.00th=[ 542], 40.00th=[ 651], 50.00th=[ 911], 60.00th=[ 1167], 00:17:40.830 | 70.00th=[ 1267], 80.00th=[ 1435], 90.00th=[ 2232], 95.00th=[ 2500], 00:17:40.830 | 99.00th=[ 2869], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937], 00:17:40.830 | 99.99th=[ 2937] 00:17:40.830 bw ( KiB/s): min= 2048, max=307200, per=4.12%, avg=138967.12, stdev=82725.93, samples=16 00:17:40.830 iops : min= 2, max= 300, avg=135.63, stdev=80.72, samples=16 00:17:40.830 lat (msec) : 100=0.08%, 500=14.50%, 750=29.57%, 1000=8.81%, 2000=35.17% 00:17:40.830 lat (msec) : >=2000=11.86% 00:17:40.830 cpu : usr=0.02%, sys=1.27%, ctx=1972, majf=0, minf=32769 00:17:40.830 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:17:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.830 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.830 issued rwts: total=1214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.830 job4: (groupid=0, jobs=1): err= 0: pid=3561773: Wed Nov 20 18:02:58 2024 00:17:40.830 read: IOPS=5, BW=6121KiB/s (6268kB/s)(73.0MiB/12212msec) 00:17:40.830 slat (usec): min=700, max=2140.8k, avg=138049.57, stdev=500316.72 00:17:40.830 clat (msec): min=2133, max=12206, avg=11390.99, stdev=1967.52 00:17:40.830 lat (msec): min=4211, max=12211, avg=11529.04, stdev=1634.32 00:17:40.830 clat percentiles (msec): 00:17:40.830 | 1.00th=[ 2140], 5.00th=[ 6342], 10.00th=[ 8557], 20.00th=[12013], 00:17:40.830 | 30.00th=[12013], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:17:40.830 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:17:40.830 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:17:40.830 | 99.99th=[12147] 00:17:40.830 lat (msec) : >=2000=100.00% 00:17:40.830 cpu : usr=0.00%, sys=0.99%, ctx=121, majf=0, minf=18689 00:17:40.830 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:17:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.830 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:40.830 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.830 job4: (groupid=0, jobs=1): err= 0: pid=3561774: Wed Nov 20 18:02:58 2024 00:17:40.830 read: IOPS=28, BW=28.7MiB/s (30.1MB/s)(344MiB/11968msec) 00:17:40.830 slat (usec): min=663, max=2140.9k, avg=34596.48, stdev=212063.70 00:17:40.830 clat (msec): min=65, max=6902, avg=2198.75, stdev=1265.55 00:17:40.830 lat (msec): min=969, max=8568, avg=2233.35, stdev=1308.94 00:17:40.830 clat percentiles (msec): 00:17:40.830 | 1.00th=[ 978], 5.00th=[ 1053], 10.00th=[ 1099], 20.00th=[ 
1217], 00:17:40.830 | 30.00th=[ 1284], 40.00th=[ 1334], 50.00th=[ 1351], 60.00th=[ 1955], 00:17:40.830 | 70.00th=[ 3339], 80.00th=[ 3574], 90.00th=[ 3842], 95.00th=[ 3977], 00:17:40.830 | 99.00th=[ 6342], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:17:40.830 | 99.99th=[ 6879] 00:17:40.830 bw ( KiB/s): min= 8692, max=122880, per=2.18%, avg=73470.00, stdev=49180.49, samples=6 00:17:40.830 iops : min= 8, max= 120, avg=71.67, stdev=48.16, samples=6 00:17:40.830 lat (msec) : 100=0.29%, 1000=1.74%, 2000=58.72%, >=2000=39.24% 00:17:40.830 cpu : usr=0.00%, sys=0.57%, ctx=717, majf=0, minf=32769 00:17:40.830 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.3%, >=64=81.7% 00:17:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.830 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:40.830 issued rwts: total=344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.830 job4: (groupid=0, jobs=1): err= 0: pid=3561775: Wed Nov 20 18:02:58 2024 00:17:40.830 read: IOPS=24, BW=24.1MiB/s (25.3MB/s)(296MiB/12271msec) 00:17:40.830 slat (usec): min=26, max=2134.7k, avg=34249.02, stdev=211060.61 00:17:40.830 clat (msec): min=1328, max=11880, avg=5100.78, stdev=3322.32 00:17:40.830 lat (msec): min=1339, max=11908, avg=5135.03, stdev=3319.75 00:17:40.830 clat percentiles (msec): 00:17:40.830 | 1.00th=[ 1334], 5.00th=[ 1452], 10.00th=[ 1519], 20.00th=[ 1787], 00:17:40.830 | 30.00th=[ 2500], 40.00th=[ 2802], 50.00th=[ 2970], 60.00th=[ 8087], 00:17:40.830 | 70.00th=[ 8792], 80.00th=[ 8926], 90.00th=[ 9194], 95.00th=[ 9329], 00:17:40.830 | 99.00th=[ 9463], 99.50th=[ 9597], 99.90th=[11879], 99.95th=[11879], 00:17:40.830 | 99.99th=[11879] 00:17:40.830 bw ( KiB/s): min= 1467, max=122880, per=1.14%, avg=38392.33, stdev=42577.43, samples=9 00:17:40.830 iops : min= 1, max= 120, avg=37.44, stdev=41.63, samples=9 00:17:40.830 lat (msec) : 2000=21.28%, >=2000=78.72% 00:17:40.830 cpu : usr=0.01%, sys=1.51%, ctx=678, majf=0, minf=32207 00:17:40.830 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.7% 00:17:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.830 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:40.830 issued rwts: total=296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.830 job4: (groupid=0, jobs=1): err= 0: pid=3561776: Wed Nov 20 18:02:58 2024 00:17:40.830 read: IOPS=39, BW=39.9MiB/s (41.8MB/s)(402MiB/10087msec) 00:17:40.830 slat (usec): min=23, max=2111.9k, avg=24886.44, stdev=159197.73 00:17:40.830 clat (msec): min=81, max=7340, avg=2200.87, stdev=2242.53 00:17:40.830 lat (msec): min=116, max=7350, avg=2225.76, stdev=2260.11 00:17:40.830 clat percentiles (msec): 00:17:40.830 | 1.00th=[ 128], 5.00th=[ 203], 10.00th=[ 288], 20.00th=[ 506], 00:17:40.830 | 30.00th=[ 1028], 40.00th=[ 1351], 50.00th=[ 1687], 60.00th=[ 1804], 00:17:40.830 | 70.00th=[ 1854], 80.00th=[ 1905], 90.00th=[ 7013], 95.00th=[ 7282], 00:17:40.830 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[ 7349], 99.95th=[ 7349], 00:17:40.830 | 99.99th=[ 7349] 00:17:40.830 bw ( KiB/s): min=34816, max=188039, per=2.39%, avg=80403.29, stdev=49520.75, samples=7 00:17:40.830 iops : min= 34, max= 183, avg=78.43, stdev=48.13, samples=7 00:17:40.830 lat (msec) : 100=0.25%, 250=7.71%, 500=11.94%, 750=5.47%, 1000=2.99% 00:17:40.830 lat (msec) : 2000=53.73%, 
>=2000=17.91% 00:17:40.830 cpu : usr=0.00%, sys=1.73%, ctx=1027, majf=0, minf=32769 00:17:40.830 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3% 00:17:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.830 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:40.830 issued rwts: total=402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.830 job4: (groupid=0, jobs=1): err= 0: pid=3561777: Wed Nov 20 18:02:58 2024 00:17:40.830 read: IOPS=138, BW=139MiB/s (146MB/s)(1392MiB/10027msec) 00:17:40.830 slat (usec): min=23, max=1252.6k, avg=7182.77, stdev=36034.07 00:17:40.830 clat (msec): min=22, max=2107, avg=689.39, stdev=516.64 00:17:40.830 lat (msec): min=27, max=3100, avg=696.58, stdev=522.78 00:17:40.830 clat percentiles (msec): 00:17:40.830 | 1.00th=[ 42], 5.00th=[ 134], 10.00th=[ 190], 20.00th=[ 247], 00:17:40.830 | 30.00th=[ 313], 40.00th=[ 393], 50.00th=[ 592], 60.00th=[ 667], 00:17:40.830 | 70.00th=[ 835], 80.00th=[ 1116], 90.00th=[ 1301], 95.00th=[ 1989], 00:17:40.830 | 99.00th=[ 2072], 99.50th=[ 2106], 99.90th=[ 2106], 99.95th=[ 2106], 00:17:40.830 | 99.99th=[ 2106] 00:17:40.830 bw ( KiB/s): min=28672, max=522240, per=5.23%, avg=176236.15, stdev=140115.42, samples=13 00:17:40.830 iops : min= 28, max= 510, avg=172.08, stdev=136.80, samples=13 00:17:40.830 lat (msec) : 50=1.15%, 100=2.23%, 250=17.46%, 500=22.99%, 750=22.63% 00:17:40.830 lat (msec) : 1000=8.26%, 2000=22.27%, >=2000=3.02% 00:17:40.830 cpu : usr=0.00%, sys=1.51%, ctx=3538, majf=0, minf=32769 00:17:40.830 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:17:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.830 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.830 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.831 job4: (groupid=0, jobs=1): err= 0: pid=3561778: Wed Nov 20 18:02:58 2024 00:17:40.831 read: IOPS=191, BW=191MiB/s (201MB/s)(2331MiB/12175msec) 00:17:40.831 slat (usec): min=25, max=1007.8k, avg=4288.85, stdev=22044.39 00:17:40.831 clat (msec): min=83, max=2595, avg=638.48, stdev=567.02 00:17:40.831 lat (msec): min=83, max=2597, avg=642.77, stdev=568.07 00:17:40.831 clat percentiles (msec): 00:17:40.831 | 1.00th=[ 102], 5.00th=[ 169], 10.00th=[ 213], 20.00th=[ 262], 00:17:40.831 | 30.00th=[ 266], 40.00th=[ 284], 50.00th=[ 405], 60.00th=[ 567], 00:17:40.831 | 70.00th=[ 701], 80.00th=[ 961], 90.00th=[ 1469], 95.00th=[ 2198], 00:17:40.831 | 99.00th=[ 2534], 99.50th=[ 2567], 99.90th=[ 2601], 99.95th=[ 2601], 00:17:40.831 | 99.99th=[ 2601] 00:17:40.831 bw ( KiB/s): min=65536, max=559104, per=7.43%, avg=250488.67, stdev=170540.14, samples=18 00:17:40.831 iops : min= 64, max= 546, avg=244.50, stdev=166.58, samples=18 00:17:40.831 lat (msec) : 100=0.60%, 250=15.32%, 500=41.48%, 750=17.42%, 1000=5.66% 00:17:40.831 lat (msec) : 2000=14.07%, >=2000=5.45% 00:17:40.831 cpu : usr=0.07%, sys=2.29%, ctx=3965, majf=0, minf=32769 00:17:40.831 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:17:40.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.831 issued rwts: total=2331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.831 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:17:40.831 job4: (groupid=0, jobs=1): err= 0: pid=3561779: Wed Nov 20 18:02:58 2024 00:17:40.831 read: IOPS=116, BW=116MiB/s (122MB/s)(1171MiB/10081msec) 00:17:40.831 slat (usec): min=25, max=1071.6k, avg=8541.59, stdev=34467.01 00:17:40.831 clat (msec): min=73, max=2949, avg=964.08, stdev=688.97 00:17:40.831 lat (msec): min=94, max=2956, avg=972.62, stdev=693.18 00:17:40.831 clat percentiles (msec): 00:17:40.831 | 1.00th=[ 153], 5.00th=[ 300], 10.00th=[ 305], 20.00th=[ 317], 00:17:40.831 | 30.00th=[ 368], 40.00th=[ 542], 50.00th=[ 902], 60.00th=[ 1053], 00:17:40.831 | 70.00th=[ 1217], 80.00th=[ 1552], 90.00th=[ 1821], 95.00th=[ 2702], 00:17:40.831 | 99.00th=[ 2903], 99.50th=[ 2937], 99.90th=[ 2937], 99.95th=[ 2937], 00:17:40.831 | 99.99th=[ 2937] 00:17:40.831 bw ( KiB/s): min=40960, max=417792, per=4.23%, avg=142540.80, stdev=114671.80, samples=15 00:17:40.831 iops : min= 40, max= 408, avg=139.20, stdev=111.98, samples=15 00:17:40.831 lat (msec) : 100=0.26%, 250=1.54%, 500=36.21%, 750=8.20%, 1000=12.38% 00:17:40.831 lat (msec) : 2000=34.33%, >=2000=7.09% 00:17:40.831 cpu : usr=0.05%, sys=2.26%, ctx=3342, majf=0, minf=32769 00:17:40.831 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:17:40.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.831 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.831 issued rwts: total=1171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.831 job4: (groupid=0, jobs=1): err= 0: pid=3561780: Wed Nov 20 18:02:58 2024 00:17:40.831 read: IOPS=160, BW=160MiB/s (168MB/s)(1927MiB/12021msec) 00:17:40.831 slat (usec): min=23, max=2139.0k, avg=6200.33, stdev=53867.30 00:17:40.831 clat (msec): min=65, max=2476, avg=745.26, stdev=674.12 00:17:40.831 lat (msec): min=104, max=2482, avg=751.46, stdev=676.79 00:17:40.831 clat percentiles (msec): 00:17:40.831 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 131], 20.00th=[ 255], 00:17:40.831 | 30.00th=[ 334], 40.00th=[ 368], 50.00th=[ 439], 60.00th=[ 510], 00:17:40.831 | 70.00th=[ 919], 80.00th=[ 1334], 90.00th=[ 2106], 95.00th=[ 2265], 00:17:40.831 | 99.00th=[ 2433], 99.50th=[ 2467], 99.90th=[ 2467], 99.95th=[ 2467], 00:17:40.831 | 99.99th=[ 2467] 00:17:40.831 bw ( KiB/s): min=24576, max=667648, per=6.04%, avg=203487.22, stdev=183538.37, samples=18 00:17:40.831 iops : min= 24, max= 652, avg=198.56, stdev=179.24, samples=18 00:17:40.831 lat (msec) : 100=0.05%, 250=19.72%, 500=39.23%, 750=6.80%, 1000=7.01% 00:17:40.831 lat (msec) : 2000=16.50%, >=2000=10.69% 00:17:40.831 cpu : usr=0.04%, sys=1.85%, ctx=4488, majf=0, minf=32769 00:17:40.831 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:17:40.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.831 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.831 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.831 job4: (groupid=0, jobs=1): err= 0: pid=3561782: Wed Nov 20 18:02:58 2024 00:17:40.831 read: IOPS=77, BW=77.1MiB/s (80.9MB/s)(933MiB/12098msec) 00:17:40.831 slat (usec): min=28, max=2084.9k, avg=12889.27, stdev=124971.81 00:17:40.831 clat (msec): min=66, max=8266, avg=1580.29, stdev=2458.73 00:17:40.831 lat (msec): min=347, max=8274, avg=1593.18, stdev=2466.45 00:17:40.831 clat percentiles (msec): 
00:17:40.831 | 1.00th=[ 359], 5.00th=[ 380], 10.00th=[ 405], 20.00th=[ 409], 00:17:40.831 | 30.00th=[ 418], 40.00th=[ 447], 50.00th=[ 489], 60.00th=[ 609], 00:17:40.831 | 70.00th=[ 852], 80.00th=[ 1099], 90.00th=[ 7886], 95.00th=[ 8087], 00:17:40.831 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8288], 99.95th=[ 8288], 00:17:40.831 | 99.99th=[ 8288] 00:17:40.831 bw ( KiB/s): min= 2048, max=327680, per=4.08%, avg=137474.50, stdev=119074.57, samples=12 00:17:40.831 iops : min= 2, max= 320, avg=134.25, stdev=116.29, samples=12 00:17:40.831 lat (msec) : 100=0.11%, 500=51.66%, 750=14.58%, 1000=9.00%, 2000=9.43% 00:17:40.831 lat (msec) : >=2000=15.22% 00:17:40.831 cpu : usr=0.04%, sys=1.36%, ctx=1148, majf=0, minf=32769 00:17:40.831 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2% 00:17:40.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.831 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.831 issued rwts: total=933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.831 job4: (groupid=0, jobs=1): err= 0: pid=3561783: Wed Nov 20 18:02:58 2024 00:17:40.831 read: IOPS=97, BW=97.4MiB/s (102MB/s)(982MiB/10079msec) 00:17:40.831 slat (usec): min=44, max=93486, avg=10182.17, stdev=10826.45 00:17:40.831 clat (msec): min=72, max=1975, avg=1246.84, stdev=328.60 00:17:40.831 lat (msec): min=79, max=1984, avg=1257.02, stdev=329.45 00:17:40.831 clat percentiles (msec): 00:17:40.831 | 1.00th=[ 167], 5.00th=[ 592], 10.00th=[ 1003], 20.00th=[ 1062], 00:17:40.831 | 30.00th=[ 1133], 40.00th=[ 1200], 50.00th=[ 1250], 60.00th=[ 1301], 00:17:40.831 | 70.00th=[ 1368], 80.00th=[ 1418], 90.00th=[ 1737], 95.00th=[ 1804], 00:17:40.831 | 99.00th=[ 1921], 99.50th=[ 1955], 99.90th=[ 1972], 99.95th=[ 1972], 00:17:40.831 | 99.99th=[ 1972] 00:17:40.831 bw ( KiB/s): min=53248, max=131072, per=2.88%, avg=97225.11, stdev=23270.48, samples=18 00:17:40.831 iops : min= 52, max= 128, avg=94.72, stdev=22.72, samples=18 00:17:40.831 lat (msec) : 100=0.51%, 250=1.22%, 500=2.44%, 750=2.95%, 1000=2.75% 00:17:40.831 lat (msec) : 2000=90.12% 00:17:40.831 cpu : usr=0.14%, sys=2.60%, ctx=2179, majf=0, minf=32769 00:17:40.831 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:17:40.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.831 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.831 issued rwts: total=982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.831 job4: (groupid=0, jobs=1): err= 0: pid=3561784: Wed Nov 20 18:02:58 2024 00:17:40.831 read: IOPS=209, BW=209MiB/s (219MB/s)(2500MiB/11960msec) 00:17:40.831 slat (usec): min=24, max=2050.7k, avg=4755.02, stdev=42076.38 00:17:40.831 clat (msec): min=63, max=2645, avg=590.57, stdev=610.06 00:17:40.831 lat (msec): min=101, max=2661, avg=595.32, stdev=612.94 00:17:40.831 clat percentiles (msec): 00:17:40.831 | 1.00th=[ 102], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 138], 00:17:40.831 | 30.00th=[ 161], 40.00th=[ 199], 50.00th=[ 326], 60.00th=[ 489], 00:17:40.831 | 70.00th=[ 667], 80.00th=[ 1020], 90.00th=[ 1536], 95.00th=[ 2165], 00:17:40.831 | 99.00th=[ 2500], 99.50th=[ 2534], 99.90th=[ 2601], 99.95th=[ 2601], 00:17:40.831 | 99.99th=[ 2635] 00:17:40.831 bw ( KiB/s): min=47104, max=991232, per=7.55%, avg=254355.37, stdev=272167.73, samples=19 00:17:40.831 iops : min= 46, max= 
968, avg=248.37, stdev=265.80, samples=19 00:17:40.832 lat (msec) : 100=0.04%, 250=44.72%, 500=16.96%, 750=9.44%, 1000=8.20% 00:17:40.832 lat (msec) : 2000=15.56%, >=2000=5.08% 00:17:40.832 cpu : usr=0.03%, sys=2.08%, ctx=4948, majf=0, minf=32769 00:17:40.832 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:40.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.832 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.832 job5: (groupid=0, jobs=1): err= 0: pid=3561793: Wed Nov 20 18:02:58 2024 00:17:40.832 read: IOPS=32, BW=32.0MiB/s (33.6MB/s)(387MiB/12088msec) 00:17:40.832 slat (usec): min=28, max=2149.1k, avg=25837.17, stdev=178021.57 00:17:40.832 clat (msec): min=615, max=8035, avg=3610.75, stdev=2763.50 00:17:40.832 lat (msec): min=615, max=8035, avg=3636.58, stdev=2765.02 00:17:40.832 clat percentiles (msec): 00:17:40.832 | 1.00th=[ 617], 5.00th=[ 651], 10.00th=[ 835], 20.00th=[ 1167], 00:17:40.832 | 30.00th=[ 1854], 40.00th=[ 2198], 50.00th=[ 2232], 60.00th=[ 2232], 00:17:40.832 | 70.00th=[ 6879], 80.00th=[ 7416], 90.00th=[ 7819], 95.00th=[ 7953], 00:17:40.832 | 99.00th=[ 8020], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020], 00:17:40.832 | 99.99th=[ 8020] 00:17:40.832 bw ( KiB/s): min= 2048, max=166220, per=1.58%, avg=53289.70, stdev=61752.97, samples=10 00:17:40.832 iops : min= 2, max= 162, avg=52.00, stdev=60.24, samples=10 00:17:40.832 lat (msec) : 750=7.49%, 1000=7.24%, 2000=18.60%, >=2000=66.67% 00:17:40.832 cpu : usr=0.01%, sys=1.31%, ctx=441, majf=0, minf=32769 00:17:40.832 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.3%, >=64=83.7% 00:17:40.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.832 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:40.832 issued rwts: total=387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.832 job5: (groupid=0, jobs=1): err= 0: pid=3561794: Wed Nov 20 18:02:58 2024 00:17:40.832 read: IOPS=100, BW=101MiB/s (106MB/s)(1011MiB/10029msec) 00:17:40.832 slat (usec): min=26, max=2045.0k, avg=9885.90, stdev=75105.39 00:17:40.832 clat (msec): min=26, max=4849, avg=751.92, stdev=403.23 00:17:40.832 lat (msec): min=28, max=4855, avg=761.80, stdev=422.95 00:17:40.832 clat percentiles (msec): 00:17:40.832 | 1.00th=[ 186], 5.00th=[ 510], 10.00th=[ 531], 20.00th=[ 617], 00:17:40.832 | 30.00th=[ 642], 40.00th=[ 651], 50.00th=[ 676], 60.00th=[ 701], 00:17:40.832 | 70.00th=[ 802], 80.00th=[ 869], 90.00th=[ 953], 95.00th=[ 1083], 00:17:40.832 | 99.00th=[ 1552], 99.50th=[ 3641], 99.90th=[ 4866], 99.95th=[ 4866], 00:17:40.832 | 99.99th=[ 4866] 00:17:40.832 bw ( KiB/s): min=81920, max=231424, per=5.04%, avg=169779.20, stdev=46944.94, samples=10 00:17:40.832 iops : min= 80, max= 226, avg=165.80, stdev=45.84, samples=10 00:17:40.832 lat (msec) : 50=0.30%, 100=0.10%, 250=0.99%, 500=3.17%, 750=62.51% 00:17:40.832 lat (msec) : 1000=26.01%, 2000=5.93%, >=2000=0.99% 00:17:40.832 cpu : usr=0.09%, sys=1.62%, ctx=1107, majf=0, minf=32769 00:17:40.832 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:17:40.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.832 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:17:40.832 issued rwts: total=1011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.832 job5: (groupid=0, jobs=1): err= 0: pid=3561795: Wed Nov 20 18:02:58 2024 00:17:40.832 read: IOPS=31, BW=31.5MiB/s (33.0MB/s)(375MiB/11918msec) 00:17:40.832 slat (usec): min=45, max=2056.3k, avg=26660.37, stdev=200228.21 00:17:40.832 clat (msec): min=659, max=8679, avg=3913.34, stdev=3075.29 00:17:40.832 lat (msec): min=663, max=8681, avg=3940.00, stdev=3087.71 00:17:40.832 clat percentiles (msec): 00:17:40.832 | 1.00th=[ 667], 5.00th=[ 676], 10.00th=[ 684], 20.00th=[ 709], 00:17:40.832 | 30.00th=[ 726], 40.00th=[ 2702], 50.00th=[ 2702], 60.00th=[ 2735], 00:17:40.832 | 70.00th=[ 6477], 80.00th=[ 8221], 90.00th=[ 8423], 95.00th=[ 8557], 00:17:40.832 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:17:40.832 | 99.99th=[ 8658] 00:17:40.832 bw ( KiB/s): min= 2048, max=131072, per=1.86%, avg=62743.12, stdev=51604.31, samples=8 00:17:40.832 iops : min= 2, max= 128, avg=61.12, stdev=50.48, samples=8 00:17:40.832 lat (msec) : 750=30.13%, 2000=4.00%, >=2000=65.87% 00:17:40.832 cpu : usr=0.02%, sys=1.07%, ctx=385, majf=0, minf=32769 00:17:40.832 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2% 00:17:40.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.832 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:40.832 issued rwts: total=375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.832 job5: (groupid=0, jobs=1): err= 0: pid=3561796: Wed Nov 20 18:02:58 2024 00:17:40.832 read: IOPS=2, BW=2906KiB/s (2976kB/s)(40.0MiB/14093msec) 00:17:40.832 slat (usec): min=958, max=2094.3k, avg=298625.04, stdev=650965.32 00:17:40.832 clat (msec): min=2147, max=14091, avg=8899.29, stdev=3336.69 00:17:40.832 lat (msec): min=4210, max=14092, avg=9197.91, stdev=3250.34 00:17:40.832 clat percentiles (msec): 00:17:40.832 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409], 00:17:40.832 | 30.00th=[ 7148], 40.00th=[ 7215], 50.00th=[ 7416], 60.00th=[ 7483], 00:17:40.832 | 70.00th=[10671], 80.00th=[13892], 90.00th=[14026], 95.00th=[14026], 00:17:40.832 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:40.832 | 99.99th=[14026] 00:17:40.832 lat (msec) : >=2000=100.00% 00:17:40.832 cpu : usr=0.01%, sys=0.21%, ctx=155, majf=0, minf=10241 00:17:40.832 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:17:40.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.832 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:40.832 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.832 job5: (groupid=0, jobs=1): err= 0: pid=3561797: Wed Nov 20 18:02:58 2024 00:17:40.832 read: IOPS=72, BW=72.5MiB/s (76.1MB/s)(728MiB/10036msec) 00:17:40.832 slat (usec): min=26, max=2076.4k, avg=13746.84, stdev=139575.43 00:17:40.832 clat (msec): min=24, max=6762, avg=1478.36, stdev=2237.40 00:17:40.832 lat (msec): min=41, max=6769, avg=1492.10, stdev=2244.77 00:17:40.832 clat percentiles (msec): 00:17:40.832 | 1.00th=[ 103], 5.00th=[ 136], 10.00th=[ 165], 20.00th=[ 401], 00:17:40.832 | 30.00th=[ 405], 40.00th=[ 409], 50.00th=[ 409], 60.00th=[ 426], 00:17:40.832 | 70.00th=[ 447], 80.00th=[ 2400], 90.00th=[ 6611], 
95.00th=[ 6745], 00:17:40.832 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:17:40.832 | 99.99th=[ 6745] 00:17:40.832 bw ( KiB/s): min=12288, max=313344, per=4.33%, avg=145993.14, stdev=133365.08, samples=7 00:17:40.832 iops : min= 12, max= 306, avg=142.57, stdev=130.24, samples=7 00:17:40.832 lat (msec) : 50=0.69%, 100=0.27%, 250=12.09%, 500=61.68%, 750=3.43% 00:17:40.832 lat (msec) : 2000=0.27%, >=2000=21.57% 00:17:40.832 cpu : usr=0.05%, sys=0.95%, ctx=753, majf=0, minf=32769 00:17:40.832 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:17:40.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.833 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:40.833 issued rwts: total=728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.833 job5: (groupid=0, jobs=1): err= 0: pid=3561798: Wed Nov 20 18:02:58 2024 00:17:40.833 read: IOPS=103, BW=103MiB/s (109MB/s)(1043MiB/10078msec) 00:17:40.833 slat (usec): min=32, max=1758.1k, avg=9596.62, stdev=65807.11 00:17:40.833 clat (msec): min=60, max=4394, avg=987.42, stdev=869.56 00:17:40.833 lat (msec): min=83, max=4406, avg=997.01, stdev=877.34 00:17:40.833 clat percentiles (msec): 00:17:40.833 | 1.00th=[ 106], 5.00th=[ 405], 10.00th=[ 409], 20.00th=[ 456], 00:17:40.833 | 30.00th=[ 651], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 793], 00:17:40.833 | 70.00th=[ 869], 80.00th=[ 953], 90.00th=[ 1770], 95.00th=[ 3440], 00:17:40.833 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396], 00:17:40.833 | 99.99th=[ 4396] 00:17:40.833 bw ( KiB/s): min=45056, max=309248, per=4.63%, avg=156115.00, stdev=69078.67, samples=12 00:17:40.833 iops : min= 44, max= 302, avg=152.33, stdev=67.51, samples=12 00:17:40.833 lat (msec) : 100=0.67%, 250=2.49%, 500=19.37%, 750=23.87%, 1000=36.05% 00:17:40.833 lat (msec) : 2000=9.30%, >=2000=8.25% 00:17:40.833 cpu : usr=0.10%, sys=1.91%, ctx=1136, majf=0, minf=32769 00:17:40.833 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:17:40.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.833 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.833 issued rwts: total=1043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.833 job5: (groupid=0, jobs=1): err= 0: pid=3561799: Wed Nov 20 18:02:58 2024 00:17:40.833 read: IOPS=36, BW=37.0MiB/s (38.8MB/s)(375MiB/10147msec) 00:17:40.833 slat (usec): min=36, max=2213.4k, avg=26859.56, stdev=155823.83 00:17:40.833 clat (msec): min=71, max=5644, avg=3049.01, stdev=1149.87 00:17:40.833 lat (msec): min=732, max=5652, avg=3075.87, stdev=1139.29 00:17:40.833 clat percentiles (msec): 00:17:40.833 | 1.00th=[ 735], 5.00th=[ 751], 10.00th=[ 1938], 20.00th=[ 2039], 00:17:40.833 | 30.00th=[ 2467], 40.00th=[ 2769], 50.00th=[ 3004], 60.00th=[ 3071], 00:17:40.833 | 70.00th=[ 3272], 80.00th=[ 4396], 90.00th=[ 4799], 95.00th=[ 5134], 00:17:40.833 | 99.00th=[ 5604], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 00:17:40.833 | 99.99th=[ 5671] 00:17:40.833 bw ( KiB/s): min= 2043, max=116736, per=1.50%, avg=50578.10, stdev=35947.07, samples=10 00:17:40.833 iops : min= 1, max= 114, avg=49.20, stdev=35.30, samples=10 00:17:40.833 lat (msec) : 100=0.27%, 750=1.87%, 1000=3.73%, 2000=12.53%, >=2000=81.60% 00:17:40.833 cpu : usr=0.01%, sys=1.63%, ctx=564, majf=0, 
minf=32769 00:17:40.833 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2% 00:17:40.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.833 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:40.833 issued rwts: total=375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.833 job5: (groupid=0, jobs=1): err= 0: pid=3561800: Wed Nov 20 18:02:58 2024 00:17:40.833 read: IOPS=6, BW=6216KiB/s (6365kB/s)(74.0MiB/12190msec) 00:17:40.833 slat (usec): min=915, max=2111.8k, avg=135970.88, stdev=483901.17 00:17:40.833 clat (msec): min=2127, max=12182, avg=10818.64, stdev=2451.23 00:17:40.833 lat (msec): min=4212, max=12189, avg=10954.61, stdev=2231.78 00:17:40.833 clat percentiles (msec): 00:17:40.833 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 8356], 20.00th=[ 8557], 00:17:40.833 | 30.00th=[11879], 40.00th=[12013], 50.00th=[12013], 60.00th=[12013], 00:17:40.833 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:17:40.833 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:17:40.833 | 99.99th=[12147] 00:17:40.833 lat (msec) : >=2000=100.00% 00:17:40.833 cpu : usr=0.02%, sys=1.01%, ctx=134, majf=0, minf=18945 00:17:40.833 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9% 00:17:40.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.833 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:40.833 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.833 job5: (groupid=0, jobs=1): err= 0: pid=3561802: Wed Nov 20 18:02:58 2024 00:17:40.833 read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(230MiB/12231msec) 00:17:40.833 slat (usec): min=135, max=2130.5k, avg=44226.98, stdev=222536.20 00:17:40.833 clat (msec): min=2056, max=8525, avg=6019.79, stdev=1605.89 00:17:40.833 lat (msec): min=3399, max=8534, avg=6064.01, stdev=1576.39 00:17:40.833 clat percentiles (msec): 00:17:40.833 | 1.00th=[ 3406], 5.00th=[ 4010], 10.00th=[ 4178], 20.00th=[ 4396], 00:17:40.833 | 30.00th=[ 4665], 40.00th=[ 5000], 50.00th=[ 6342], 60.00th=[ 6879], 00:17:40.833 | 70.00th=[ 7349], 80.00th=[ 7752], 90.00th=[ 8087], 95.00th=[ 8288], 00:17:40.833 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8557], 99.95th=[ 8557], 00:17:40.833 | 99.99th=[ 8557] 00:17:40.833 bw ( KiB/s): min= 1530, max=57344, per=0.69%, avg=23380.67, stdev=23966.44, samples=9 00:17:40.833 iops : min= 1, max= 56, avg=22.78, stdev=23.46, samples=9 00:17:40.833 lat (msec) : >=2000=100.00% 00:17:40.833 cpu : usr=0.01%, sys=1.56%, ctx=418, majf=0, minf=32769 00:17:40.833 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=13.9%, >=64=72.6% 00:17:40.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.833 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:17:40.833 issued rwts: total=230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.833 job5: (groupid=0, jobs=1): err= 0: pid=3561803: Wed Nov 20 18:02:58 2024 00:17:40.833 read: IOPS=2, BW=2896KiB/s (2966kB/s)(34.0MiB/12022msec) 00:17:40.833 slat (msec): min=2, max=2141, avg=351.57, stdev=707.73 00:17:40.833 clat (msec): min=68, max=12018, avg=8753.41, stdev=3794.03 00:17:40.833 lat (msec): min=2161, max=12021, avg=9104.98, stdev=3507.88 00:17:40.833 
clat percentiles (msec): 00:17:40.833 | 1.00th=[ 69], 5.00th=[ 2165], 10.00th=[ 4144], 20.00th=[ 4279], 00:17:40.833 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10805], 60.00th=[11879], 00:17:40.833 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:17:40.833 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:40.833 | 99.99th=[12013] 00:17:40.833 lat (msec) : 100=2.94%, >=2000=97.06% 00:17:40.833 cpu : usr=0.01%, sys=0.27%, ctx=101, majf=0, minf=8705 00:17:40.833 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:17:40.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.833 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:40.833 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.833 job5: (groupid=0, jobs=1): err= 0: pid=3561804: Wed Nov 20 18:02:58 2024 00:17:40.833 read: IOPS=31, BW=31.6MiB/s (33.1MB/s)(318MiB/10075msec) 00:17:40.833 slat (usec): min=34, max=2598.9k, avg=31560.61, stdev=213571.23 00:17:40.833 clat (msec): min=36, max=5685, avg=2680.31, stdev=2035.74 00:17:40.833 lat (msec): min=86, max=5722, avg=2711.87, stdev=2034.14 00:17:40.833 clat percentiles (msec): 00:17:40.833 | 1.00th=[ 93], 5.00th=[ 567], 10.00th=[ 625], 20.00th=[ 684], 00:17:40.833 | 30.00th=[ 751], 40.00th=[ 827], 50.00th=[ 2005], 60.00th=[ 3775], 00:17:40.833 | 70.00th=[ 4010], 80.00th=[ 5470], 90.00th=[ 5604], 95.00th=[ 5671], 00:17:40.833 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 00:17:40.833 | 99.99th=[ 5671] 00:17:40.833 bw ( KiB/s): min= 4096, max=192512, per=1.92%, avg=64825.33, stdev=68562.49, samples=6 00:17:40.833 iops : min= 4, max= 188, avg=63.17, stdev=66.91, samples=6 00:17:40.833 lat (msec) : 50=0.31%, 100=1.26%, 250=1.26%, 750=26.42%, 1000=12.58% 00:17:40.833 lat (msec) : 2000=7.23%, >=2000=50.94% 00:17:40.833 cpu : usr=0.03%, sys=0.98%, ctx=445, majf=0, minf=32769 00:17:40.833 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.0%, 32=10.1%, >=64=80.2% 00:17:40.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.833 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:40.833 issued rwts: total=318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.833 job5: (groupid=0, jobs=1): err= 0: pid=3561805: Wed Nov 20 18:02:58 2024 00:17:40.833 read: IOPS=7, BW=8078KiB/s (8272kB/s)(95.0MiB/12043msec) 00:17:40.833 slat (usec): min=689, max=2108.2k, avg=126033.51, stdev=422181.81 00:17:40.834 clat (msec): min=68, max=12033, avg=5968.21, stdev=3296.04 00:17:40.834 lat (msec): min=2118, max=12042, avg=6094.25, stdev=3296.98 00:17:40.834 clat percentiles (msec): 00:17:40.834 | 1.00th=[ 69], 5.00th=[ 2165], 10.00th=[ 3708], 20.00th=[ 3775], 00:17:40.834 | 30.00th=[ 3943], 40.00th=[ 4044], 50.00th=[ 4144], 60.00th=[ 5067], 00:17:40.834 | 70.00th=[ 5403], 80.00th=[ 8658], 90.00th=[12013], 95.00th=[12013], 00:17:40.834 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:40.834 | 99.99th=[12013] 00:17:40.834 lat (msec) : 100=1.05%, >=2000=98.95% 00:17:40.834 cpu : usr=0.02%, sys=0.38%, ctx=245, majf=0, minf=24321 00:17:40.834 IO depths : 1=1.1%, 2=2.1%, 4=4.2%, 8=8.4%, 16=16.8%, 32=33.7%, >=64=33.7% 00:17:40.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.834 complete : 0=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:40.834 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.834 job5: (groupid=0, jobs=1): err= 0: pid=3561806: Wed Nov 20 18:02:58 2024 00:17:40.834 read: IOPS=1, BW=1111KiB/s (1138kB/s)(13.0MiB/11982msec) 00:17:40.834 slat (msec): min=7, max=4224, avg=916.35, stdev=1333.46 00:17:40.834 clat (msec): min=68, max=11874, avg=7112.98, stdev=4112.42 00:17:40.834 lat (msec): min=2124, max=11981, avg=8029.33, stdev=3720.47 00:17:40.834 clat percentiles (msec): 00:17:40.834 | 1.00th=[ 69], 5.00th=[ 69], 10.00th=[ 2123], 20.00th=[ 2198], 00:17:40.834 | 30.00th=[ 2198], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[ 8658], 00:17:40.834 | 70.00th=[10671], 80.00th=[10671], 90.00th=[11879], 95.00th=[11879], 00:17:40.834 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:17:40.834 | 99.99th=[11879] 00:17:40.834 lat (msec) : 100=7.69%, >=2000=92.31% 00:17:40.834 cpu : usr=0.00%, sys=0.08%, ctx=86, majf=0, minf=3329 00:17:40.834 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:40.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.834 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.834 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.834 00:17:40.834 Run status group 0 (all jobs): 00:17:40.834 READ: bw=3291MiB/s (3451MB/s), 508KiB/s-209MiB/s (520kB/s-219MB/s), io=46.3GiB (49.7GB), run=10010-14414msec 00:17:40.834 00:17:40.834 Disk stats (read/write): 00:17:40.834 nvme0n1: ios=39339/0, merge=0/0, ticks=5271667/0, in_queue=5271667, util=98.54% 00:17:40.834 nvme1n1: ios=58789/0, merge=0/0, ticks=6783999/0, in_queue=6783999, util=98.87% 00:17:40.834 nvme2n1: ios=59923/0, merge=0/0, ticks=8274006/0, in_queue=8274006, util=98.86% 00:17:40.834 nvme3n1: ios=66734/0, merge=0/0, ticks=7458989/0, in_queue=7458989, util=98.20% 00:17:40.834 nvme4n1: ios=116349/0, merge=0/0, ticks=6581881/0, in_queue=6581881, util=99.11% 00:17:40.834 nvme5n1: ios=37780/0, merge=0/0, ticks=6884002/0, in_queue=6884002, util=99.04% 00:17:41.094 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:17:41.094 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:17:41.094 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:41.094 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:17:42.483 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:42.483 18:03:00 
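A quick read on the fio block above before the disconnects proceed: the run-status summary closes out the srq_overwhelm read phase, six jobs driving nvme0n1 through nvme5n1 at queue depth 128, 46.3 GiB read in total at an aggregate 3291 MiB/s, with every namespace above 98% utilization. The exact job file lives in SPDK's test tree; as a hedged sketch, a single job producing output in this shape would look roughly like:

    # Sketch only -- these flags are assumptions chosen to match the stats above
    # (random reads at iodepth 128 against one exported namespace), not the
    # verbatim srq_overwhelm job file:
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=randread --bs=128k --iodepth=128 --numjobs=1 --group_reporting

With the I/O done, the trace continues by disconnecting each controller in turn.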
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:42.483 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.869 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:17:43.869 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:43.869 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:43.869 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:17:43.869 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:43.869 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:17:43.869 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:43.869 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.870 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.870 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:43.870 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.870 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:43.870 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:17:45.257 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 
00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:45.257 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:46.646 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:46.646 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:48.033 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:48.033 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:49.428 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.428 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:49.428 rmmod nvme_rdma 00:17:49.429 rmmod nvme_fabrics 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3559263 ']' 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3559263 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 3559263 ']' 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 3559263 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559263 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559263' 00:17:49.429 killing process with pid 3559263 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 3559263 00:17:49.429 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 3559263 00:17:49.690 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:49.690 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:49.690 00:17:49.691 real 0m43.079s 00:17:49.691 user 2m34.676s 00:17:49.691 sys 0m20.144s 00:17:49.691 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.691 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:49.691 ************************************ 00:17:49.691 END TEST nvmf_srq_overwhelm 00:17:49.691 ************************************ 00:17:49.691 18:03:08 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:49.691 18:03:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.691 18:03:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.691 18:03:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.691 ************************************ 00:17:49.691 START TEST nvmf_shutdown 00:17:49.691 ************************************ 00:17:49.691 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:49.691 
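Condensing the teardown traced above: for each of the six subsystems the host disconnects the controller, waitforserial_disconnect polls lsblk until no block device carries that subsystem's serial, and the target deletes the subsystem over RPC; then nvme-rdma/nvme-fabrics are unloaded and the target process (pid 3559263) is killed. A condensed paraphrase of that loop (helper internals re-sketched under assumptions, not the verbatim functions; rpc_cmd is the test environment's wrapper around scripts/rpc.py):

    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        serial=$(printf 'SPDK%014d' "$i")   # SPDK00000000000000 .. SPDK00000000000005
        # waitforserial_disconnect, roughly: block until the serial vanishes from lsblk
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do sleep 1; done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
    modprobe -v -r nvme-rdma nvme-fabrics

The shutdown suite that starts here begins by re-sourcing test/nvmf/common.sh, traced below.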
* Looking for test storage... 00:17:49.691 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:49.691 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.691 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.691 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.953 --rc genhtml_branch_coverage=1 00:17:49.953 --rc genhtml_function_coverage=1 00:17:49.953 --rc genhtml_legend=1 00:17:49.953 --rc geninfo_all_blocks=1 00:17:49.953 --rc geninfo_unexecuted_blocks=1 00:17:49.953 00:17:49.953 ' 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.953 --rc genhtml_branch_coverage=1 00:17:49.953 --rc genhtml_function_coverage=1 00:17:49.953 --rc genhtml_legend=1 00:17:49.953 --rc geninfo_all_blocks=1 00:17:49.953 --rc geninfo_unexecuted_blocks=1 00:17:49.953 00:17:49.953 ' 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.953 --rc genhtml_branch_coverage=1 00:17:49.953 --rc genhtml_function_coverage=1 00:17:49.953 --rc genhtml_legend=1 00:17:49.953 --rc geninfo_all_blocks=1 00:17:49.953 --rc geninfo_unexecuted_blocks=1 00:17:49.953 00:17:49.953 ' 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.953 --rc genhtml_branch_coverage=1 00:17:49.953 --rc genhtml_function_coverage=1 00:17:49.953 --rc genhtml_legend=1 00:17:49.953 --rc geninfo_all_blocks=1 00:17:49.953 --rc geninfo_unexecuted_blocks=1 00:17:49.953 00:17:49.953 ' 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.953 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:49.954 18:03:08 
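One warning worth flagging in the common.sh prologue above: the traced test '[' '' -eq 1 ']' trips "[: : integer expression expected" because the variable being compared is empty, so test sees no integer at all. The run shrugs it off (the branch is simply not taken), but the conventional hardening is to default the value before the numeric comparison. A hypothetical fix for that line (the variable name is an assumption; only the empty-value pattern comes from the trace):

    # common.sh line 33 compares an unset/empty variable numerically; defaulting it
    # to 0 keeps the same behavior without the "integer expression expected" noise:
    if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
        :   # non-root launch path would go here; its body is not shown in this trace
    fi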
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:49.954 ************************************ 00:17:49.954 START TEST nvmf_shutdown_tc1 00:17:49.954 ************************************ 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.954 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:58.096 18:03:16 
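The device scan that follows builds per-vendor PCI ID allowlists (Intel E810/X722 parts plus the Mellanox mlx5 family) and then walks the matching devices; on this rig it lands on two ports of a single ConnectX-4 Lx (vendor 0x15b3, device 0x1015) at 0000:98:00.0/.1. A hedged manual equivalent of that lookup, using stock tools rather than common.sh itself:

    # Rough manual equivalent of the scan traced below:
    lspci -D -d 15b3:1015                       # -> 0000:98:00.0 and 0000:98:00.1 here
    ls /sys/bus/pci/devices/0000:98:00.0/net    # -> mlx_0_0 (netdev behind port 0)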
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:17:58.096 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:17:58.096 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:98:00.0: mlx_0_0' 00:17:58.096 Found net devices under 0000:98:00.0: mlx_0_0 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:17:58.096 Found net devices under 0000:98:00.1: mlx_0_1 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
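With the IB/RDMA module stack loaded (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), allocate_nic_ips, whose trace continues below, resolves each RDMA-capable netdev and reads off its IPv4 address, starting from NVMF_IP_LEAST_ADDR=8 in the 192.168.100.0/24 test subnet. Distilled, the per-interface lookup amounts to (a condensed paraphrase of the traced helper):

    # get_ip_address, roughly: first IPv4 address on the interface, CIDR suffix stripped
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # -> 192.168.100.9

Farther down, the resulting newline-separated RDMA_IP_LIST is split with head/tail into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.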
00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:58.096 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:58.358 18:03:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:58.358 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:58.358 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:17:58.358 altname enp152s0f0np0 00:17:58.358 altname ens817f0np0 00:17:58.358 inet 192.168.100.8/24 scope global mlx_0_0 00:17:58.358 valid_lft forever preferred_lft forever 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:58.358 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:58.358 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:17:58.358 altname enp152s0f1np1 00:17:58.358 altname ens817f1np1 00:17:58.358 inet 192.168.100.9/24 scope global mlx_0_1 00:17:58.358 valid_lft forever preferred_lft forever 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:58.358 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:58.359 
18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:58.359 192.168.100.9' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:58.359 192.168.100.9' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:17:58.359 18:03:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:58.359 192.168.100.9' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3571026 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3571026 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3571026 ']' 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.359 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:58.359 [2024-11-20 18:03:16.848903] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:17:58.359 [2024-11-20 18:03:16.848981] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.620 [2024-11-20 18:03:16.950550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.620 [2024-11-20 18:03:17.002328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.620 [2024-11-20 18:03:17.002379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.620 [2024-11-20 18:03:17.002389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.620 [2024-11-20 18:03:17.002396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.620 [2024-11-20 18:03:17.002403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.620 [2024-11-20 18:03:17.004459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.620 [2024-11-20 18:03:17.004624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.620 [2024-11-20 18:03:17.004755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:58.620 [2024-11-20 18:03:17.004756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.191 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.192 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:17:59.192 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.192 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.192 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.192 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.192 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:59.192 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.192 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.452 [2024-11-20 18:03:17.737759] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5a2190/0x5a6680) succeed. 00:17:59.452 [2024-11-20 18:03:17.752079] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5a3820/0x5e7d20) succeed. 
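[Editor's sketch] A few records back, common.sh reduced each interface to a bare IPv4 address and then split the resulting two-line list into the target addresses used for the rest of the test. Reconstructed from the traced common.sh@116-117 and @484-486 commands, the derivation is:

    get_ip_address() {
        local interface=$1
        # "... 192.168.100.8/24 ..." -> "192.168.100.8"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
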
00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.452 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.452 Malloc1 00:17:59.713 [2024-11-20 18:03:17.988537] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:59.713 Malloc2 00:17:59.713 Malloc3 00:17:59.713 Malloc4 00:17:59.713 Malloc5 00:17:59.713 Malloc6 00:17:59.713 Malloc7 00:17:59.975 Malloc8 00:17:59.975 Malloc9 00:17:59.975 Malloc10 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3571354 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3571354 /var/tmp/bdevperf.sock 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3571354 ']' 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
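[Editor's sketch] The ten redirected `cat` calls above (shutdown.sh@28-29) append one RPC batch per subsystem to rpcs.txt, and the xtrace-disabled rpc_cmd at shutdown.sh@36 then plays the file back against the target, which is what produces the Malloc1-Malloc10 bdevs and the 4420 listener notice. A sketch of the loop's shape, using echo in place of the script's heredoc cat; the batch body is an assumption (the trace hides it), though all four RPC methods are real SPDK RPCs:

    num_subsystems=({1..10})
    rm -rf "$testdir/rpcs.txt"
    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create -b Malloc$i 128 512"   # sizes assumed
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"   # shutdown.sh@36, xtrace disabled in the log
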
00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.975 { 00:17:59.975 "params": { 00:17:59.975 "name": "Nvme$subsystem", 00:17:59.975 "trtype": "$TEST_TRANSPORT", 00:17:59.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.975 "adrfam": "ipv4", 00:17:59.975 "trsvcid": "$NVMF_PORT", 00:17:59.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.975 "hdgst": ${hdgst:-false}, 00:17:59.975 "ddgst": ${ddgst:-false} 00:17:59.975 }, 00:17:59.975 "method": "bdev_nvme_attach_controller" 00:17:59.975 } 00:17:59.975 EOF 00:17:59.975 )") 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.975 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.975 { 00:17:59.975 "params": { 00:17:59.975 "name": "Nvme$subsystem", 00:17:59.975 "trtype": "$TEST_TRANSPORT", 00:17:59.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.975 "adrfam": "ipv4", 00:17:59.975 "trsvcid": "$NVMF_PORT", 00:17:59.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.975 "hdgst": ${hdgst:-false}, 00:17:59.975 "ddgst": ${ddgst:-false} 00:17:59.975 }, 00:17:59.975 "method": "bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.976 { 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme$subsystem", 00:17:59.976 "trtype": "$TEST_TRANSPORT", 00:17:59.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "$NVMF_PORT", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.976 "hdgst": ${hdgst:-false}, 00:17:59.976 "ddgst": ${ddgst:-false} 00:17:59.976 }, 00:17:59.976 "method": 
"bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.976 { 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme$subsystem", 00:17:59.976 "trtype": "$TEST_TRANSPORT", 00:17:59.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "$NVMF_PORT", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.976 "hdgst": ${hdgst:-false}, 00:17:59.976 "ddgst": ${ddgst:-false} 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.976 { 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme$subsystem", 00:17:59.976 "trtype": "$TEST_TRANSPORT", 00:17:59.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "$NVMF_PORT", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.976 "hdgst": ${hdgst:-false}, 00:17:59.976 "ddgst": ${ddgst:-false} 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.976 { 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme$subsystem", 00:17:59.976 "trtype": "$TEST_TRANSPORT", 00:17:59.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "$NVMF_PORT", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.976 "hdgst": ${hdgst:-false}, 00:17:59.976 "ddgst": ${ddgst:-false} 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 [2024-11-20 18:03:18.453441] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:17:59.976 [2024-11-20 18:03:18.453495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.976 { 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme$subsystem", 00:17:59.976 "trtype": "$TEST_TRANSPORT", 00:17:59.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "$NVMF_PORT", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.976 "hdgst": ${hdgst:-false}, 00:17:59.976 "ddgst": ${ddgst:-false} 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.976 { 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme$subsystem", 00:17:59.976 "trtype": "$TEST_TRANSPORT", 00:17:59.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "$NVMF_PORT", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.976 "hdgst": ${hdgst:-false}, 00:17:59.976 "ddgst": ${ddgst:-false} 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.976 { 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme$subsystem", 00:17:59.976 "trtype": "$TEST_TRANSPORT", 00:17:59.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "$NVMF_PORT", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.976 "hdgst": ${hdgst:-false}, 00:17:59.976 "ddgst": ${ddgst:-false} 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:59.976 { 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme$subsystem", 00:17:59.976 "trtype": "$TEST_TRANSPORT", 00:17:59.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.976 "adrfam": 
"ipv4", 00:17:59.976 "trsvcid": "$NVMF_PORT", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.976 "hdgst": ${hdgst:-false}, 00:17:59.976 "ddgst": ${ddgst:-false} 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 } 00:17:59.976 EOF 00:17:59.976 )") 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:17:59.976 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme1", 00:17:59.976 "trtype": "rdma", 00:17:59.976 "traddr": "192.168.100.8", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "4420", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.976 "hdgst": false, 00:17:59.976 "ddgst": false 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 },{ 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme2", 00:17:59.976 "trtype": "rdma", 00:17:59.976 "traddr": "192.168.100.8", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "4420", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:59.976 "hdgst": false, 00:17:59.976 "ddgst": false 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 },{ 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme3", 00:17:59.976 "trtype": "rdma", 00:17:59.976 "traddr": "192.168.100.8", 00:17:59.976 "adrfam": "ipv4", 00:17:59.976 "trsvcid": "4420", 00:17:59.976 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:59.976 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:59.976 "hdgst": false, 00:17:59.976 "ddgst": false 00:17:59.976 }, 00:17:59.976 "method": "bdev_nvme_attach_controller" 00:17:59.976 },{ 00:17:59.976 "params": { 00:17:59.976 "name": "Nvme4", 00:17:59.976 "trtype": "rdma", 00:17:59.976 "traddr": "192.168.100.8", 00:17:59.976 "adrfam": "ipv4", 00:17:59.977 "trsvcid": "4420", 00:17:59.977 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:59.977 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:59.977 "hdgst": false, 00:17:59.977 "ddgst": false 00:17:59.977 }, 00:17:59.977 "method": "bdev_nvme_attach_controller" 00:17:59.977 },{ 00:17:59.977 "params": { 00:17:59.977 "name": "Nvme5", 00:17:59.977 "trtype": "rdma", 00:17:59.977 "traddr": "192.168.100.8", 00:17:59.977 "adrfam": "ipv4", 00:17:59.977 "trsvcid": "4420", 00:17:59.977 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:59.977 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:59.977 "hdgst": false, 00:17:59.977 "ddgst": false 00:17:59.977 }, 00:17:59.977 "method": "bdev_nvme_attach_controller" 00:17:59.977 },{ 00:17:59.977 "params": { 00:17:59.977 "name": "Nvme6", 00:17:59.977 "trtype": "rdma", 00:17:59.977 "traddr": "192.168.100.8", 00:17:59.977 "adrfam": "ipv4", 00:17:59.977 "trsvcid": "4420", 00:17:59.977 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:59.977 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:59.977 "hdgst": false, 00:17:59.977 "ddgst": false 00:17:59.977 }, 00:17:59.977 "method": "bdev_nvme_attach_controller" 00:17:59.977 },{ 00:17:59.977 "params": { 00:17:59.977 "name": "Nvme7", 00:17:59.977 "trtype": "rdma", 
00:17:59.977 "traddr": "192.168.100.8", 00:17:59.977 "adrfam": "ipv4", 00:17:59.977 "trsvcid": "4420", 00:17:59.977 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:59.977 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:59.977 "hdgst": false, 00:17:59.977 "ddgst": false 00:17:59.977 }, 00:17:59.977 "method": "bdev_nvme_attach_controller" 00:17:59.977 },{ 00:17:59.977 "params": { 00:17:59.977 "name": "Nvme8", 00:17:59.977 "trtype": "rdma", 00:17:59.977 "traddr": "192.168.100.8", 00:17:59.977 "adrfam": "ipv4", 00:17:59.977 "trsvcid": "4420", 00:17:59.977 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:59.977 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:59.977 "hdgst": false, 00:17:59.977 "ddgst": false 00:17:59.977 }, 00:17:59.977 "method": "bdev_nvme_attach_controller" 00:17:59.977 },{ 00:17:59.977 "params": { 00:17:59.977 "name": "Nvme9", 00:17:59.977 "trtype": "rdma", 00:17:59.977 "traddr": "192.168.100.8", 00:17:59.977 "adrfam": "ipv4", 00:17:59.977 "trsvcid": "4420", 00:17:59.977 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:59.977 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:59.977 "hdgst": false, 00:17:59.977 "ddgst": false 00:17:59.977 }, 00:17:59.977 "method": "bdev_nvme_attach_controller" 00:17:59.977 },{ 00:17:59.977 "params": { 00:17:59.977 "name": "Nvme10", 00:17:59.977 "trtype": "rdma", 00:17:59.977 "traddr": "192.168.100.8", 00:17:59.977 "adrfam": "ipv4", 00:17:59.977 "trsvcid": "4420", 00:17:59.977 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:59.977 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:59.977 "hdgst": false, 00:17:59.977 "ddgst": false 00:17:59.977 }, 00:17:59.977 "method": "bdev_nvme_attach_controller" 00:17:59.977 }' 00:18:00.237 [2024-11-20 18:03:18.533099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.237 [2024-11-20 18:03:18.569647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3571354 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:18:01.179 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:18:02.125 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3571354 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3571026 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.125 { 00:18:02.125 "params": { 00:18:02.125 "name": "Nvme$subsystem", 00:18:02.125 "trtype": "$TEST_TRANSPORT", 00:18:02.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.125 "adrfam": "ipv4", 00:18:02.125 "trsvcid": "$NVMF_PORT", 00:18:02.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.125 "hdgst": ${hdgst:-false}, 00:18:02.125 "ddgst": ${ddgst:-false} 00:18:02.125 }, 00:18:02.125 "method": "bdev_nvme_attach_controller" 00:18:02.125 } 00:18:02.125 EOF 00:18:02.125 )") 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.125 { 00:18:02.125 "params": { 00:18:02.125 "name": "Nvme$subsystem", 00:18:02.125 "trtype": "$TEST_TRANSPORT", 00:18:02.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.125 "adrfam": "ipv4", 00:18:02.125 "trsvcid": "$NVMF_PORT", 00:18:02.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.125 "hdgst": ${hdgst:-false}, 00:18:02.125 "ddgst": ${ddgst:-false} 00:18:02.125 }, 00:18:02.125 "method": "bdev_nvme_attach_controller" 00:18:02.125 } 00:18:02.125 EOF 00:18:02.125 )") 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.125 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.125 { 00:18:02.125 "params": { 00:18:02.125 "name": "Nvme$subsystem", 00:18:02.125 "trtype": "$TEST_TRANSPORT", 00:18:02.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.125 "adrfam": "ipv4", 00:18:02.125 "trsvcid": "$NVMF_PORT", 00:18:02.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.125 "hdgst": ${hdgst:-false}, 00:18:02.125 "ddgst": ${ddgst:-false} 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 } 00:18:02.126 EOF 00:18:02.126 )") 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.126 18:03:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.126 { 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme$subsystem", 00:18:02.126 "trtype": "$TEST_TRANSPORT", 00:18:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "$NVMF_PORT", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.126 "hdgst": ${hdgst:-false}, 00:18:02.126 "ddgst": ${ddgst:-false} 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 } 00:18:02.126 EOF 00:18:02.126 )") 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.126 { 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme$subsystem", 00:18:02.126 "trtype": "$TEST_TRANSPORT", 00:18:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "$NVMF_PORT", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.126 "hdgst": ${hdgst:-false}, 00:18:02.126 "ddgst": ${ddgst:-false} 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 } 00:18:02.126 EOF 00:18:02.126 )") 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.126 { 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme$subsystem", 00:18:02.126 "trtype": "$TEST_TRANSPORT", 00:18:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "$NVMF_PORT", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.126 "hdgst": ${hdgst:-false}, 00:18:02.126 "ddgst": ${ddgst:-false} 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 } 00:18:02.126 EOF 00:18:02.126 )") 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.126 [2024-11-20 18:03:20.506233] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:18:02.126 [2024-11-20 18:03:20.506287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3571869 ] 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.126 { 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme$subsystem", 00:18:02.126 "trtype": "$TEST_TRANSPORT", 00:18:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "$NVMF_PORT", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.126 "hdgst": ${hdgst:-false}, 00:18:02.126 "ddgst": ${ddgst:-false} 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 } 00:18:02.126 EOF 00:18:02.126 )") 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.126 { 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme$subsystem", 00:18:02.126 "trtype": "$TEST_TRANSPORT", 00:18:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "$NVMF_PORT", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.126 "hdgst": ${hdgst:-false}, 00:18:02.126 "ddgst": ${ddgst:-false} 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 } 00:18:02.126 EOF 00:18:02.126 )") 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.126 { 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme$subsystem", 00:18:02.126 "trtype": "$TEST_TRANSPORT", 00:18:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "$NVMF_PORT", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.126 "hdgst": ${hdgst:-false}, 00:18:02.126 "ddgst": ${ddgst:-false} 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 } 00:18:02.126 EOF 00:18:02.126 )") 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.126 { 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme$subsystem", 00:18:02.126 "trtype": "$TEST_TRANSPORT", 00:18:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "$NVMF_PORT", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.126 "hdgst": ${hdgst:-false}, 00:18:02.126 "ddgst": ${ddgst:-false} 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 } 00:18:02.126 EOF 00:18:02.126 )") 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:18:02.126 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme1", 00:18:02.126 "trtype": "rdma", 00:18:02.126 "traddr": "192.168.100.8", 00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "4420", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.126 "hdgst": false, 00:18:02.126 "ddgst": false 00:18:02.126 }, 00:18:02.126 "method": "bdev_nvme_attach_controller" 00:18:02.126 },{ 00:18:02.126 "params": { 00:18:02.126 "name": "Nvme2", 00:18:02.126 "trtype": "rdma", 00:18:02.126 "traddr": "192.168.100.8", 00:18:02.126 "adrfam": "ipv4", 00:18:02.126 "trsvcid": "4420", 00:18:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 },{ 00:18:02.127 "params": { 00:18:02.127 "name": "Nvme3", 00:18:02.127 "trtype": "rdma", 00:18:02.127 "traddr": "192.168.100.8", 00:18:02.127 "adrfam": "ipv4", 00:18:02.127 "trsvcid": "4420", 00:18:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 },{ 00:18:02.127 "params": { 00:18:02.127 "name": "Nvme4", 00:18:02.127 "trtype": "rdma", 00:18:02.127 "traddr": "192.168.100.8", 00:18:02.127 "adrfam": "ipv4", 00:18:02.127 "trsvcid": "4420", 00:18:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 },{ 00:18:02.127 "params": { 00:18:02.127 "name": "Nvme5", 00:18:02.127 "trtype": "rdma", 00:18:02.127 "traddr": "192.168.100.8", 00:18:02.127 "adrfam": "ipv4", 00:18:02.127 "trsvcid": "4420", 00:18:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 },{ 00:18:02.127 "params": { 00:18:02.127 "name": "Nvme6", 00:18:02.127 "trtype": "rdma", 00:18:02.127 "traddr": "192.168.100.8", 00:18:02.127 "adrfam": "ipv4", 00:18:02.127 "trsvcid": "4420", 00:18:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 },{ 00:18:02.127 "params": { 00:18:02.127 "name": "Nvme7", 00:18:02.127 
"trtype": "rdma", 00:18:02.127 "traddr": "192.168.100.8", 00:18:02.127 "adrfam": "ipv4", 00:18:02.127 "trsvcid": "4420", 00:18:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 },{ 00:18:02.127 "params": { 00:18:02.127 "name": "Nvme8", 00:18:02.127 "trtype": "rdma", 00:18:02.127 "traddr": "192.168.100.8", 00:18:02.127 "adrfam": "ipv4", 00:18:02.127 "trsvcid": "4420", 00:18:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 },{ 00:18:02.127 "params": { 00:18:02.127 "name": "Nvme9", 00:18:02.127 "trtype": "rdma", 00:18:02.127 "traddr": "192.168.100.8", 00:18:02.127 "adrfam": "ipv4", 00:18:02.127 "trsvcid": "4420", 00:18:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 },{ 00:18:02.127 "params": { 00:18:02.127 "name": "Nvme10", 00:18:02.127 "trtype": "rdma", 00:18:02.127 "traddr": "192.168.100.8", 00:18:02.127 "adrfam": "ipv4", 00:18:02.127 "trsvcid": "4420", 00:18:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:02.127 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:02.127 "hdgst": false, 00:18:02.127 "ddgst": false 00:18:02.127 }, 00:18:02.127 "method": "bdev_nvme_attach_controller" 00:18:02.127 }' 00:18:02.127 [2024-11-20 18:03:20.586104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.127 [2024-11-20 18:03:20.621951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.071 Running I/O for 1 seconds... 
00:18:04.459 2453.00 IOPS, 153.31 MiB/s 00:18:04.460 Latency(us) 00:18:04.460 [2024-11-20T17:03:22.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.460 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme1n1 : 1.22 280.96 17.56 0.00 0.00 217180.40 16820.91 223696.21 00:18:04.460 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme2n1 : 1.24 309.76 19.36 0.00 0.00 200468.41 13271.04 207093.76 00:18:04.460 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme3n1 : 1.24 311.78 19.49 0.00 0.00 195371.42 3372.37 196608.00 00:18:04.460 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme4n1 : 1.24 317.15 19.82 0.00 0.00 187944.44 4259.84 159034.03 00:18:04.460 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme5n1 : 1.24 315.24 19.70 0.00 0.00 186237.91 5707.09 176510.29 00:18:04.460 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme6n1 : 1.23 312.04 19.50 0.00 0.00 186622.01 29054.29 133693.44 00:18:04.460 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme7n1 : 1.23 311.52 19.47 0.00 0.00 183789.23 30146.56 124081.49 00:18:04.460 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme8n1 : 1.23 311.01 19.44 0.00 0.00 180861.72 31238.83 144179.20 00:18:04.460 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme9n1 : 1.24 310.50 19.41 0.00 0.00 177936.78 32331.09 165150.72 00:18:04.460 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:04.460 Verification LBA range: start 0x0 length 0x400 00:18:04.460 Nvme10n1 : 1.25 205.54 12.85 0.00 0.00 264029.65 6772.05 548754.77 00:18:04.460 [2024-11-20T17:03:22.996Z] =================================================================================================================== 00:18:04.460 [2024-11-20T17:03:22.996Z] Total : 2985.49 186.59 0.00 0.00 195494.77 3372.37 548754.77 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:18:04.722 18:03:23 
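[Editor's sketch] The killprocess helper traced in the records that follow tears down the target (pid 3571026): it validates the pid, checks via ps that it is not about to signal the sudo wrapper by mistake, and only then kills and reaps the process. Reconstructed from the traced autotest_common.sh@954-978 checks; the individual commands are verbatim from the trace, the control flow around them is inferred:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                            # @954: no pid given
        kill -0 "$pid" || return 0                           # @958: already gone
        if [[ $(uname) == Linux ]]; then                     # @959
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # @960
            [[ $process_name == sudo ]] && return 1          # @964: never kill sudo
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978
    }
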
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:18:04.722 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:04.723 rmmod nvme_rdma 00:18:04.723 rmmod nvme_fabrics 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3571026 ']' 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3571026 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3571026 ']' 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3571026 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3571026 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3571026' 00:18:04.723 killing process with pid 3571026 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3571026 00:18:04.723 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3571026 00:18:04.984 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.984 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:04.984 00:18:04.984 real 0m15.129s 00:18:04.984 user 0m31.114s 00:18:04.984 sys 0m7.273s 00:18:04.984 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.984 18:03:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:04.984 ************************************ 00:18:04.984 END TEST nvmf_shutdown_tc1 00:18:04.984 ************************************ 00:18:04.984 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:04.984 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:04.984 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.984 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:05.246 ************************************ 00:18:05.246 START TEST nvmf_shutdown_tc2 00:18:05.246 ************************************ 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.246 18:03:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # 
pci_devs+=("${mlx[@]}") 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.246 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:05.246 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:05.247 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.247 18:03:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:05.247 Found net devices under 0000:98:00.0: mlx_0_0 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:05.247 Found net devices under 0000:98:00.1: mlx_0_1 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 
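
Two things just happened in the trace above: each matched ConnectX-4 (0x15b3:0x1015) function was resolved to its kernel netdev through sysfs, and rdma_device_init loaded the IB/RDMA module stack (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm). A minimal sketch of the sysfs lookup, using the same glob and suffix-strip the harness runs (the helper name here is mine):

    # Map a PCI function to its netdev name, as the trace does with
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) and "${pci_net_devs[@]##*/}".
    pci_to_netdev() {
        local pci=$1
        local devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        [[ -e ${devs[0]} ]] || return 1                  # glob didn't match: no netdev bound
        echo "${devs[@]##*/}"                            # strip the path, keep the ifname
    }
    pci_to_netdev 0000:98:00.0    # -> mlx_0_0 on this rig
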
00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:05.247 18:03:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:05.247 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:05.247 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:18:05.247 altname enp152s0f0np0 00:18:05.247 altname ens817f0np0 00:18:05.247 inet 192.168.100.8/24 scope global mlx_0_0 00:18:05.247 valid_lft forever preferred_lft forever 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:05.247 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:05.247 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:18:05.247 altname enp152s0f1np1 00:18:05.247 altname ens817f1np1 00:18:05.247 inet 192.168.100.9/24 scope global mlx_0_1 00:18:05.247 valid_lft forever preferred_lft forever 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:05.247 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:05.248 
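
The get_ip_address calls traced here are a single pipeline: `ip -o` prints one record per address, awk field 4 is "ADDR/PREFIX", and cut drops the prefix length. A sketch of the same function as it runs above:

    # get_ip_address as exercised in the trace: IPv4 address of an interface.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9
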
18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:05.248 192.168.100.9' 00:18:05.248 18:03:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:05.248 192.168.100.9' 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:05.248 192.168.100.9' 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:05.248 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3572587 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3572587 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3572587 ']' 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
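
The RDMA_IP_LIST handling a few entries above is worth spelling out: the list is a newline-separated string, and the first/second target IPs are peeled off with head/tail exactly as traced. A sketch with the values from this run:

    # How the trace splits RDMA_IP_LIST into the two target IPs.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
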
00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.509 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:05.509 [2024-11-20 18:03:23.847931] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:18:05.509 [2024-11-20 18:03:23.847999] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.509 [2024-11-20 18:03:23.941101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.509 [2024-11-20 18:03:23.975224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.509 [2024-11-20 18:03:23.975258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.509 [2024-11-20 18:03:23.975264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.509 [2024-11-20 18:03:23.975269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.509 [2024-11-20 18:03:23.975273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.509 [2024-11-20 18:03:23.976880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.509 [2024-11-20 18:03:23.977045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.509 [2024-11-20 18:03:23.977202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.509 [2024-11-20 18:03:23.977202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:06.452 [2024-11-20 18:03:24.730289] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd19190/0xd1d680) succeed. 00:18:06.452 [2024-11-20 18:03:24.741173] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd1a820/0xd5ed20) succeed. 
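
The start-then-wait dance traced above (nvmfappstart launches nvmf_tgt, waitforlisten polls /var/tmp/spdk.sock, then the RDMA transport is created over RPC) can be sketched roughly as follows; the real waitforlisten in autotest_common.sh is more careful about retries and timeouts, so treat this as a simplified outline:

    # Simplified sketch of the nvmfappstart pattern in the trace.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
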
00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.452 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:06.452 Malloc1 00:18:06.452 [2024-11-20 18:03:24.952045] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:06.452 Malloc2 00:18:06.714 Malloc3 00:18:06.714 Malloc4 00:18:06.714 Malloc5 00:18:06.714 Malloc6 00:18:06.714 Malloc7 00:18:06.714 Malloc8 00:18:06.976 Malloc9 00:18:06.976 Malloc10 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3572887 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3572887 /var/tmp/bdevperf.sock 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3572887 ']' 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
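
The Malloc1..Malloc10 lines and the "Target Listening on 192.168.100.8 port 4420" notice are the result of the per-subsystem `cat` blocks above being collected into rpcs.txt and replayed over RPC. The exact block contents live in target/shutdown.sh and are not shown in this trace, so the following is only an approximation of what one subsystem's worth of RPCs amounts to (the malloc size and block size here are assumptions):

    # Rough sketch of one generated subsystem block; flags/sizes may differ
    # from what target/shutdown.sh actually emits.
    i=1
    rpc.py bdev_malloc_create 128 512 -b Malloc$i
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420
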
00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.976 { 00:18:06.976 "params": { 00:18:06.976 "name": "Nvme$subsystem", 00:18:06.976 "trtype": "$TEST_TRANSPORT", 00:18:06.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.976 "adrfam": "ipv4", 00:18:06.976 "trsvcid": "$NVMF_PORT", 00:18:06.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.976 "hdgst": ${hdgst:-false}, 00:18:06.976 "ddgst": ${ddgst:-false} 00:18:06.976 }, 00:18:06.976 "method": "bdev_nvme_attach_controller" 00:18:06.976 } 00:18:06.976 EOF 00:18:06.976 )") 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.976 { 00:18:06.976 "params": { 00:18:06.976 "name": "Nvme$subsystem", 00:18:06.976 "trtype": "$TEST_TRANSPORT", 00:18:06.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.976 "adrfam": "ipv4", 00:18:06.976 "trsvcid": "$NVMF_PORT", 00:18:06.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.976 "hdgst": ${hdgst:-false}, 00:18:06.976 "ddgst": ${ddgst:-false} 00:18:06.976 }, 00:18:06.976 "method": "bdev_nvme_attach_controller" 00:18:06.976 } 00:18:06.976 EOF 00:18:06.976 )") 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.976 { 00:18:06.976 "params": { 00:18:06.976 "name": "Nvme$subsystem", 00:18:06.976 "trtype": "$TEST_TRANSPORT", 00:18:06.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.976 "adrfam": "ipv4", 00:18:06.976 "trsvcid": "$NVMF_PORT", 00:18:06.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.976 "hdgst": ${hdgst:-false}, 00:18:06.976 "ddgst": ${ddgst:-false} 00:18:06.976 }, 00:18:06.976 "method": 
"bdev_nvme_attach_controller" 00:18:06.976 } 00:18:06.976 EOF 00:18:06.976 )") 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.976 { 00:18:06.976 "params": { 00:18:06.976 "name": "Nvme$subsystem", 00:18:06.976 "trtype": "$TEST_TRANSPORT", 00:18:06.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.976 "adrfam": "ipv4", 00:18:06.976 "trsvcid": "$NVMF_PORT", 00:18:06.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.976 "hdgst": ${hdgst:-false}, 00:18:06.976 "ddgst": ${ddgst:-false} 00:18:06.976 }, 00:18:06.976 "method": "bdev_nvme_attach_controller" 00:18:06.976 } 00:18:06.976 EOF 00:18:06.976 )") 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.976 { 00:18:06.976 "params": { 00:18:06.976 "name": "Nvme$subsystem", 00:18:06.976 "trtype": "$TEST_TRANSPORT", 00:18:06.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.976 "adrfam": "ipv4", 00:18:06.976 "trsvcid": "$NVMF_PORT", 00:18:06.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.976 "hdgst": ${hdgst:-false}, 00:18:06.976 "ddgst": ${ddgst:-false} 00:18:06.976 }, 00:18:06.976 "method": "bdev_nvme_attach_controller" 00:18:06.976 } 00:18:06.976 EOF 00:18:06.976 )") 00:18:06.976 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.977 { 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme$subsystem", 00:18:06.977 "trtype": "$TEST_TRANSPORT", 00:18:06.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "$NVMF_PORT", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.977 "hdgst": ${hdgst:-false}, 00:18:06.977 "ddgst": ${ddgst:-false} 00:18:06.977 }, 00:18:06.977 "method": "bdev_nvme_attach_controller" 00:18:06.977 } 00:18:06.977 EOF 00:18:06.977 )") 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.977 [2024-11-20 18:03:25.410759] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:18:06.977 [2024-11-20 18:03:25.410813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3572887 ] 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.977 { 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme$subsystem", 00:18:06.977 "trtype": "$TEST_TRANSPORT", 00:18:06.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "$NVMF_PORT", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.977 "hdgst": ${hdgst:-false}, 00:18:06.977 "ddgst": ${ddgst:-false} 00:18:06.977 }, 00:18:06.977 "method": "bdev_nvme_attach_controller" 00:18:06.977 } 00:18:06.977 EOF 00:18:06.977 )") 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.977 { 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme$subsystem", 00:18:06.977 "trtype": "$TEST_TRANSPORT", 00:18:06.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "$NVMF_PORT", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.977 "hdgst": ${hdgst:-false}, 00:18:06.977 "ddgst": ${ddgst:-false} 00:18:06.977 }, 00:18:06.977 "method": "bdev_nvme_attach_controller" 00:18:06.977 } 00:18:06.977 EOF 00:18:06.977 )") 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.977 { 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme$subsystem", 00:18:06.977 "trtype": "$TEST_TRANSPORT", 00:18:06.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "$NVMF_PORT", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.977 "hdgst": ${hdgst:-false}, 00:18:06.977 "ddgst": ${ddgst:-false} 00:18:06.977 }, 00:18:06.977 "method": "bdev_nvme_attach_controller" 00:18:06.977 } 00:18:06.977 EOF 00:18:06.977 )") 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:06.977 { 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme$subsystem", 00:18:06.977 "trtype": "$TEST_TRANSPORT", 00:18:06.977 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "$NVMF_PORT", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.977 "hdgst": ${hdgst:-false}, 00:18:06.977 "ddgst": ${ddgst:-false} 00:18:06.977 }, 00:18:06.977 "method": "bdev_nvme_attach_controller" 00:18:06.977 } 00:18:06.977 EOF 00:18:06.977 )") 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:18:06.977 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme1", 00:18:06.977 "trtype": "rdma", 00:18:06.977 "traddr": "192.168.100.8", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "4420", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.977 "hdgst": false, 00:18:06.977 "ddgst": false 00:18:06.977 }, 00:18:06.977 "method": "bdev_nvme_attach_controller" 00:18:06.977 },{ 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme2", 00:18:06.977 "trtype": "rdma", 00:18:06.977 "traddr": "192.168.100.8", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "4420", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:06.977 "hdgst": false, 00:18:06.977 "ddgst": false 00:18:06.977 }, 00:18:06.977 "method": "bdev_nvme_attach_controller" 00:18:06.977 },{ 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme3", 00:18:06.977 "trtype": "rdma", 00:18:06.977 "traddr": "192.168.100.8", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "4420", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:06.977 "hdgst": false, 00:18:06.977 "ddgst": false 00:18:06.977 }, 00:18:06.977 "method": "bdev_nvme_attach_controller" 00:18:06.977 },{ 00:18:06.977 "params": { 00:18:06.977 "name": "Nvme4", 00:18:06.977 "trtype": "rdma", 00:18:06.977 "traddr": "192.168.100.8", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "4420", 00:18:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:06.977 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:06.977 "hdgst": false, 00:18:06.978 "ddgst": false 00:18:06.978 }, 00:18:06.978 "method": "bdev_nvme_attach_controller" 00:18:06.978 },{ 00:18:06.978 "params": { 00:18:06.978 "name": "Nvme5", 00:18:06.978 "trtype": "rdma", 00:18:06.978 "traddr": "192.168.100.8", 00:18:06.978 "adrfam": "ipv4", 00:18:06.978 "trsvcid": "4420", 00:18:06.978 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:06.978 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:06.978 "hdgst": false, 00:18:06.978 "ddgst": false 00:18:06.978 }, 00:18:06.978 "method": "bdev_nvme_attach_controller" 00:18:06.978 },{ 00:18:06.978 "params": { 00:18:06.978 "name": "Nvme6", 00:18:06.978 "trtype": "rdma", 00:18:06.978 "traddr": "192.168.100.8", 00:18:06.978 "adrfam": "ipv4", 00:18:06.978 "trsvcid": "4420", 00:18:06.978 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:06.978 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:06.978 "hdgst": false, 00:18:06.978 "ddgst": false 00:18:06.978 }, 00:18:06.978 "method": "bdev_nvme_attach_controller" 00:18:06.978 },{ 00:18:06.978 "params": { 00:18:06.978 "name": "Nvme7", 00:18:06.978 
"trtype": "rdma", 00:18:06.978 "traddr": "192.168.100.8", 00:18:06.978 "adrfam": "ipv4", 00:18:06.978 "trsvcid": "4420", 00:18:06.978 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:06.978 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:06.978 "hdgst": false, 00:18:06.978 "ddgst": false 00:18:06.978 }, 00:18:06.978 "method": "bdev_nvme_attach_controller" 00:18:06.978 },{ 00:18:06.978 "params": { 00:18:06.978 "name": "Nvme8", 00:18:06.978 "trtype": "rdma", 00:18:06.978 "traddr": "192.168.100.8", 00:18:06.978 "adrfam": "ipv4", 00:18:06.978 "trsvcid": "4420", 00:18:06.978 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:06.978 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:06.978 "hdgst": false, 00:18:06.978 "ddgst": false 00:18:06.978 }, 00:18:06.978 "method": "bdev_nvme_attach_controller" 00:18:06.978 },{ 00:18:06.978 "params": { 00:18:06.978 "name": "Nvme9", 00:18:06.978 "trtype": "rdma", 00:18:06.978 "traddr": "192.168.100.8", 00:18:06.978 "adrfam": "ipv4", 00:18:06.978 "trsvcid": "4420", 00:18:06.978 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:06.978 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:06.978 "hdgst": false, 00:18:06.978 "ddgst": false 00:18:06.978 }, 00:18:06.978 "method": "bdev_nvme_attach_controller" 00:18:06.978 },{ 00:18:06.978 "params": { 00:18:06.978 "name": "Nvme10", 00:18:06.978 "trtype": "rdma", 00:18:06.978 "traddr": "192.168.100.8", 00:18:06.978 "adrfam": "ipv4", 00:18:06.978 "trsvcid": "4420", 00:18:06.978 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:06.978 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:06.978 "hdgst": false, 00:18:06.978 "ddgst": false 00:18:06.978 }, 00:18:06.978 "method": "bdev_nvme_attach_controller" 00:18:06.978 }' 00:18:06.978 [2024-11-20 18:03:25.489925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.239 [2024-11-20 18:03:25.526097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.179 Running I/O for 10 seconds... 
00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.179 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.440 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.440 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:18:08.440 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:18:08.440 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.701 
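
The waitforio helper being traced here polls the bdev's read counter over the bdevperf RPC socket until it sees real traffic: the first pass reads num_read_ops=3 (below the 100-op threshold, so it sleeps 0.25 s), and the retry below reads 131 and breaks. A sketch of that loop as it runs in the trace:

    # Poll Nvme1n1's read counter until the workload is demonstrably running.
    for ((i = 10; i != 0; i--)); do
        n=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$n" -ge 100 ] && break     # this run: 3 on the first pass, 131 on the second
        sleep 0.25
    done
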
18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3572887 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3572887 ']' 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3572887 00:18:08.701 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:18:08.962 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.962 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3572887 00:18:08.962 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.962 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.962 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3572887' 00:18:08.962 killing process with pid 3572887 00:18:08.962 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3572887 00:18:08.962 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3572887
00:18:09.223 2407.00 IOPS, 150.44 MiB/s [2024-11-20T17:03:27.759Z] Received shutdown signal, test time was about 1.062276 seconds
00:18:09.223
00:18:09.223 Latency(us)
00:18:09.223 [2024-11-20T17:03:27.759Z] Device Information : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
00:18:09.223 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme1n1  : 1.04  268.23  16.76  0.00  0.00  231385.77  25012.91  246415.36
00:18:09.223 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme2n1  : 1.05  259.30  16.21  0.00  0.00  234360.32  23374.51  234181.97
00:18:09.223 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme3n1  : 1.05  305.67  19.10  0.00  0.00  197019.14   3850.24  174762.67
00:18:09.223 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme4n1  : 1.05  305.16  19.07  0.00  0.00  193638.23  11523.41  167772.16
00:18:09.223 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme5n1  : 1.05  304.56  19.03  0.00  0.00  191313.58  16930.13  157286.40
00:18:09.223 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme6n1  : 1.05  303.90  18.99  0.00  0.00  188534.78  20097.71  140683.95
00:18:09.223 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme7n1  : 1.05  303.40  18.96  0.00  0.00  183462.23  19114.67  129324.37
00:18:09.223 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme8n1  : 1.06  302.81  18.93  0.00  0.00  180923.90  18350.08  119712.43
00:18:09.223 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme9n1  : 1.06  302.17  18.89  0.00  0.00  177986.56  17913.17  131072.00
00:18:09.223 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:09.223 Verification LBA range: start 0x0 length 0x400
00:18:09.223 Nvme10n1 : 1.06  241.23  15.08  0.00  0.00  217896.96  17476.27  258648.75
00:18:09.223 [2024-11-20T17:03:27.759Z] ===================================================================================================================
00:18:09.223 [2024-11-20T17:03:27.759Z] Total    : 2896.44 181.03  0.00  0.00  198294.18   3850.24  258648.75
00:18:09.223 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3572587 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.607 18:03:28
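
A quick sanity check on the totals in the table above: each I/O is 65536 bytes (the -o 65536 on the bdevperf command line), so 2896.44 IOPS x 64 KiB = 2896.44 / 16 MiB/s ~= 181.03 MiB/s, which matches the MiB/s column exactly. The ~1.05 s runtimes reflect the early shutdown signal sent after waitforio succeeded, not the full -t 10 window.
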
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:10.607 rmmod nvme_rdma 00:18:10.607 rmmod nvme_fabrics 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3572587 ']' 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3572587 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3572587 ']' 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3572587 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3572587 00:18:10.607 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.608 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.608 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3572587' 00:18:10.608 killing process with pid 3572587 00:18:10.608 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3572587 00:18:10.608 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3572587 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:10.868 00:18:10.868 real 0m5.619s 00:18:10.868 user 0m22.984s 00:18:10.868 sys 0m1.023s 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:10.868 ************************************ 00:18:10.868 END TEST nvmf_shutdown_tc2 00:18:10.868 ************************************ 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:10.868 ************************************ 
00:18:10.868 START TEST nvmf_shutdown_tc3 00:18:10.868 ************************************ 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.868 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:18:10.869 18:03:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:10.869 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:10.869 
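The device discovery traced above keys off PCI vendor:device IDs: nvmf/common.sh collects Intel e810/x722 and Mellanox parts out of a pre-built pci_bus_cache map, then narrows pci_devs to the Mellanox list because this run sets SPDK_TEST_NVMF_NICS=mlx5. A minimal sketch of that lookup, assuming a pci_bus_cache laid out as "vendor:device" -> space-separated BDF list and seeded with the two ConnectX-4 Lx ports this rig reports:

    # Sketch only: pci_bus_cache layout assumed, seeded with the two ports
    # the trace finds (0x15b3 - 0x1015).
    declare -A pci_bus_cache=(
        ["0x15b3:0x1015"]="0000:98:00.0 0000:98:00.1"
    )
    mellanox=0x15b3
    mlx=()
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # same expansion as common.sh@343
    pci_devs=("${mlx[@]}")                        # mlx5 run: keep only Mellanox BDFs
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci (0x15b3 - 0x1015)"
    done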
18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:10.869 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:10.869 Found net devices under 0000:98:00.0: mlx_0_0 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:10.869 Found net devices under 0000:98:00.1: mlx_0_1 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:10.869 18:03:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:10.869 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.869 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:18:10.869 altname enp152s0f0np0 00:18:10.869 altname ens817f0np0 00:18:10.869 inet 192.168.100.8/24 scope global mlx_0_0 00:18:10.869 valid_lft forever preferred_lft forever 00:18:10.869 
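The per-interface address pulled out above comes from a small helper; reconstructed from the common.sh@116-117 trace it is essentially the pipeline below (a sketch, not the verbatim script):

    # get_ip_address as exercised in the trace (common.sh@116-117):
    # $4 of `ip -o -4 addr show` is "addr/prefix"; cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
    get_ip_address mlx_0_1   # -> 192.168.100.9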
18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:10.869 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.869 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:18:10.869 altname enp152s0f1np1 00:18:10.869 altname ens817f1np1 00:18:10.869 inet 192.168.100.9/24 scope global mlx_0_1 00:18:10.869 valid_lft forever preferred_lft forever 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:18:10.869 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:10.870 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:10.870 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:10.870 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:10.870 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:10.870 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.870 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:10.870 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:10.870 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:11.130 18:03:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:11.130 192.168.100.9' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:11.130 192.168.100.9' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:11.130 192.168.100.9' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:18:11.130 18:03:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3573818 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3573818 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3573818 ']' 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.130 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:11.130 [2024-11-20 18:03:29.541098] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:18:11.130 [2024-11-20 18:03:29.541148] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.130 [2024-11-20 18:03:29.629847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.130 [2024-11-20 18:03:29.658892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:11.130 [2024-11-20 18:03:29.658922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.130 [2024-11-20 18:03:29.658927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.130 [2024-11-20 18:03:29.658932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.130 [2024-11-20 18:03:29.658936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.130 [2024-11-20 18:03:29.660416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.130 [2024-11-20 18:03:29.660572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.130 [2024-11-20 18:03:29.660726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.130 [2024-11-20 18:03:29.660728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.390 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:11.390 [2024-11-20 18:03:29.805524] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1972190/0x1976680) succeed. 00:18:11.390 [2024-11-20 18:03:29.816739] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1973820/0x19b7d20) succeed. 
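The nvmf_create_transport RPC traced above is what brings up the RDMA transport and triggers the two create_ib_device notices (one per mlx5 port). Issued by hand against the target's default application socket it would look like this (sketch, assuming the in-tree scripts/rpc.py that rpc_cmd forwards to):

    # Same call rpc_cmd forwards in the trace: RDMA transport,
    # 1024 shared buffers, 8 KiB I/O unit size.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192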
00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.650 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:11.651 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:18:11.651 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:18:11.651 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.651 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:11.651 Malloc1 00:18:11.651 [2024-11-20 18:03:30.023407] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:11.651 Malloc2 00:18:11.651 Malloc3 00:18:11.651 Malloc4 00:18:11.651 Malloc5 00:18:11.911 Malloc6 00:18:11.911 Malloc7 00:18:11.911 Malloc8 00:18:11.911 Malloc9 00:18:11.911 Malloc10 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3573982 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3573982 /var/tmp/bdevperf.sock 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3573982 ']' 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.911 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
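The bdevperf run that follows gets its ten RDMA controllers from a JSON config generated on the fly and fed through process substitution, which is where the --json /dev/fd/63 in the traced command line comes from. The launch reduces to roughly:

    # Sketch of the traced invocation: gen_nvmf_target_json emits one
    # bdev_nvme_attach_controller stanza per subsystem (Nvme1..Nvme10),
    # and bash's <() turns that stream into the /dev/fd/63 seen below.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10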
00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:11.912 { 00:18:11.912 "params": { 00:18:11.912 "name": "Nvme$subsystem", 00:18:11.912 "trtype": "$TEST_TRANSPORT", 00:18:11.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.912 "adrfam": "ipv4", 00:18:11.912 "trsvcid": "$NVMF_PORT", 00:18:11.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.912 "hdgst": ${hdgst:-false}, 00:18:11.912 "ddgst": ${ddgst:-false} 00:18:11.912 }, 00:18:11.912 "method": "bdev_nvme_attach_controller" 00:18:11.912 } 00:18:11.912 EOF 00:18:11.912 )") 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:11.912 { 00:18:11.912 "params": { 00:18:11.912 "name": "Nvme$subsystem", 00:18:11.912 "trtype": "$TEST_TRANSPORT", 00:18:11.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.912 "adrfam": "ipv4", 00:18:11.912 "trsvcid": "$NVMF_PORT", 00:18:11.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.912 "hdgst": ${hdgst:-false}, 00:18:11.912 "ddgst": ${ddgst:-false} 00:18:11.912 }, 00:18:11.912 "method": "bdev_nvme_attach_controller" 00:18:11.912 } 00:18:11.912 EOF 00:18:11.912 )") 00:18:11.912 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:12.174 { 00:18:12.174 "params": { 00:18:12.174 "name": "Nvme$subsystem", 00:18:12.174 "trtype": "$TEST_TRANSPORT", 00:18:12.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.174 "adrfam": "ipv4", 00:18:12.174 "trsvcid": "$NVMF_PORT", 00:18:12.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.174 "hdgst": ${hdgst:-false}, 00:18:12.174 "ddgst": ${ddgst:-false} 00:18:12.174 }, 00:18:12.174 "method": 
"bdev_nvme_attach_controller" 00:18:12.174 } 00:18:12.174 EOF 00:18:12.174 )") 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:12.174 { 00:18:12.174 "params": { 00:18:12.174 "name": "Nvme$subsystem", 00:18:12.174 "trtype": "$TEST_TRANSPORT", 00:18:12.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.174 "adrfam": "ipv4", 00:18:12.174 "trsvcid": "$NVMF_PORT", 00:18:12.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.174 "hdgst": ${hdgst:-false}, 00:18:12.174 "ddgst": ${ddgst:-false} 00:18:12.174 }, 00:18:12.174 "method": "bdev_nvme_attach_controller" 00:18:12.174 } 00:18:12.174 EOF 00:18:12.174 )") 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:12.174 { 00:18:12.174 "params": { 00:18:12.174 "name": "Nvme$subsystem", 00:18:12.174 "trtype": "$TEST_TRANSPORT", 00:18:12.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.174 "adrfam": "ipv4", 00:18:12.174 "trsvcid": "$NVMF_PORT", 00:18:12.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.174 "hdgst": ${hdgst:-false}, 00:18:12.174 "ddgst": ${ddgst:-false} 00:18:12.174 }, 00:18:12.174 "method": "bdev_nvme_attach_controller" 00:18:12.174 } 00:18:12.174 EOF 00:18:12.174 )") 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:12.174 { 00:18:12.174 "params": { 00:18:12.174 "name": "Nvme$subsystem", 00:18:12.174 "trtype": "$TEST_TRANSPORT", 00:18:12.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.174 "adrfam": "ipv4", 00:18:12.174 "trsvcid": "$NVMF_PORT", 00:18:12.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.174 "hdgst": ${hdgst:-false}, 00:18:12.174 "ddgst": ${ddgst:-false} 00:18:12.174 }, 00:18:12.174 "method": "bdev_nvme_attach_controller" 00:18:12.174 } 00:18:12.174 EOF 00:18:12.174 )") 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:12.174 { 00:18:12.174 "params": { 00:18:12.174 "name": "Nvme$subsystem", 00:18:12.174 "trtype": "$TEST_TRANSPORT", 00:18:12.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.174 "adrfam": "ipv4", 00:18:12.174 "trsvcid": "$NVMF_PORT", 00:18:12.174 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.174 "hdgst": ${hdgst:-false}, 00:18:12.174 "ddgst": ${ddgst:-false} 00:18:12.174 }, 00:18:12.174 "method": "bdev_nvme_attach_controller" 00:18:12.174 } 00:18:12.174 EOF 00:18:12.174 )") 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.174 [2024-11-20 18:03:30.483860] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:18:12.174 [2024-11-20 18:03:30.483975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573982 ] 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:12.174 { 00:18:12.174 "params": { 00:18:12.174 "name": "Nvme$subsystem", 00:18:12.174 "trtype": "$TEST_TRANSPORT", 00:18:12.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.174 "adrfam": "ipv4", 00:18:12.174 "trsvcid": "$NVMF_PORT", 00:18:12.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.174 "hdgst": ${hdgst:-false}, 00:18:12.174 "ddgst": ${ddgst:-false} 00:18:12.174 }, 00:18:12.174 "method": "bdev_nvme_attach_controller" 00:18:12.174 } 00:18:12.174 EOF 00:18:12.174 )") 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:12.174 { 00:18:12.174 "params": { 00:18:12.174 "name": "Nvme$subsystem", 00:18:12.174 "trtype": "$TEST_TRANSPORT", 00:18:12.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.174 "adrfam": "ipv4", 00:18:12.174 "trsvcid": "$NVMF_PORT", 00:18:12.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.174 "hdgst": ${hdgst:-false}, 00:18:12.174 "ddgst": ${ddgst:-false} 00:18:12.174 }, 00:18:12.174 "method": "bdev_nvme_attach_controller" 00:18:12.174 } 00:18:12.174 EOF 00:18:12.174 )") 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:12.174 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:12.174 { 00:18:12.174 "params": { 00:18:12.174 "name": "Nvme$subsystem", 00:18:12.174 "trtype": "$TEST_TRANSPORT", 00:18:12.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "$NVMF_PORT", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.175 "hdgst": ${hdgst:-false}, 00:18:12.175 "ddgst": ${ddgst:-false} 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 } 00:18:12.175 EOF 00:18:12.175 )") 00:18:12.175 18:03:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:18:12.175 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:18:12.175 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:18:12.175 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme1", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme2", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme3", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme4", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme5", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme6", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme7", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 
"name": "Nvme8", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme9", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 },{ 00:18:12.175 "params": { 00:18:12.175 "name": "Nvme10", 00:18:12.175 "trtype": "rdma", 00:18:12.175 "traddr": "192.168.100.8", 00:18:12.175 "adrfam": "ipv4", 00:18:12.175 "trsvcid": "4420", 00:18:12.175 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:12.175 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:12.175 "hdgst": false, 00:18:12.175 "ddgst": false 00:18:12.175 }, 00:18:12.175 "method": "bdev_nvme_attach_controller" 00:18:12.175 }' 00:18:12.175 [2024-11-20 18:03:30.566574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.175 [2024-11-20 18:03:30.603090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.116 Running I/O for 10 seconds... 00:18:13.116 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.116 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:18:13.116 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:13.116 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.116 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:13.377 18:03:31 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:18:13.377 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:18:13.637 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:18:13.637 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:13.637 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:13.637 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:13.637 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.637 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3573818 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3573818 ']' 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3573818 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3573818 00:18:13.897 18:03:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3573818' 00:18:13.897 killing process with pid 3573818 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3573818 00:18:13.897 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3573818 00:18:14.157 2142.00 IOPS, 133.88 MiB/s [2024-11-20T17:03:32.693Z] 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:18:15.102 [2024-11-20 18:03:33.354722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.102 [2024-11-20 18:03:33.354768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:18:15.102 [2024-11-20 18:03:33.354781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.102 [2024-11-20 18:03:33.354789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:18:15.102 [2024-11-20 18:03:33.354798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.102 [2024-11-20 18:03:33.354805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:18:15.102 [2024-11-20 18:03:33.354814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.102 [2024-11-20 18:03:33.354822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:18:15.102 [2024-11-20 18:03:33.357095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.102 [2024-11-20 18:03:33.357139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:18:15.102 [2024-11-20 18:03:33.357188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.102 [2024-11-20 18:03:33.357214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.102 [2024-11-20 18:03:33.357238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.102 [2024-11-20 18:03:33.357260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.357284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.357306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.357330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.357351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.359831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.103 [2024-11-20 18:03:33.359848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:18:15.103 [2024-11-20 18:03:33.360033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.360060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.360085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.360107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.360131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.360153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.360188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.360211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.363338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.103 [2024-11-20 18:03:33.363409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:18:15.103 [2024-11-20 18:03:33.363482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.363515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.363543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.363565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.363589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.363610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.363633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.363655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.365897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.103 [2024-11-20 18:03:33.365909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:18:15.103 [2024-11-20 18:03:33.365924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.365932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.365940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.365947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.365955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.365962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.365969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.365976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.368554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.103 [2024-11-20 18:03:33.368586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:18:15.103 [2024-11-20 18:03:33.368631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.368666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.368692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.368714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.368737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.368758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.368781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.103 [2024-11-20 18:03:33.368802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.103 [2024-11-20 18:03:33.371771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.103 [2024-11-20 18:03:33.371805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:18:15.103 [2024-11-20 18:03:33.374656] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.377203] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.379614] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.382207] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.384960] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.385214] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.385257] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.385390] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.385427] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:18:15.103 [2024-11-20 18:03:33.385463] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.385492] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.385676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:15.103 [2024-11-20 18:03:33.385689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:18:15.103 [2024-11-20 18:03:33.385699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:18:15.103 [2024-11-20 18:03:33.385709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:18:15.103 [2024-11-20 18:03:33.385717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:18:15.103 [2024-11-20 18:03:33.397371] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:18:15.103 [2024-11-20 18:03:33.397447] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:15.103 [2024-11-20 18:03:33.397457] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:15.103 [2024-11-20 18:03:33.397463] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cf040 00:18:15.103 [2024-11-20 18:03:33.397641] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:15.103 [2024-11-20 18:03:33.397650] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:15.103 [2024-11-20 18:03:33.397655] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ad8c0 00:18:15.103 [2024-11-20 18:03:33.397801] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:15.103 [2024-11-20 18:03:33.397810] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:15.103 [2024-11-20 18:03:33.397815] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d7000 00:18:15.103 [2024-11-20 18:03:33.397991] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:15.103 [2024-11-20 18:03:33.398000] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:15.103 [2024-11-20 18:03:33.398006] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e4280 00:18:15.103 [2024-11-20 18:03:33.398192] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:15.103 [2024-11-20 18:03:33.398201] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:15.103 [2024-11-20 18:03:33.398206] 
nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:18:15.104 [2024-11-20 18:03:33.405843] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:18:15.104 [2024-11-20 18:03:33.408696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d270000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d291000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2b2000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2d3000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2f4000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d315000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d336000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d357000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000d378000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d399000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3ba000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3db000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.408990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3fc000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.408997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d41d000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d43e000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d45f000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e7d000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e5c000 
len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e3b000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e1a000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009df9000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009dd8000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009db7000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d96000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d75000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d54000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d33000 len:0x10000 key:0x184700 00:18:15.104 
[2024-11-20 18:03:33.409288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d12000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009cf1000 len:0x10000 key:0x184700 00:18:15.104 [2024-11-20 18:03:33.409331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.104 [2024-11-20 18:03:33.409343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009cd0000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a0cf000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a0ae000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a2df000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a2be000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a29d000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a27c000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a25b000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a23a000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a219000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a1f8000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a1d7000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a1b6000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a195000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a174000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a153000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a132000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a111000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a0f0000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a4ef000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a4ce000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a4ad000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a48c000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a46b000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a44a000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a429000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a408000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3e7000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3c6000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3a5000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.409981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a384000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.409989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.410001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a363000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.410009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.410023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a342000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.410032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.410044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a321000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.410052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 
00:18:15.105 [2024-11-20 18:03:33.410064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a300000 len:0x10000 key:0x184700 00:18:15.105 [2024-11-20 18:03:33.410072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.105 [2024-11-20 18:03:33.413795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:18:15.106 [2024-11-20 18:03:33.416087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f3fa80 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f2fa00 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f1f980 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f0f900 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eff880 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eef800 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001edf780 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ecf700 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ebf680 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eaf600 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e9f580 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e8f500 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e7f480 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e6f400 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e5f380 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e4f300 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e3f280 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e2f200 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e1f180 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e0f100 len:0x10000 key:0x183b00 00:18:15.106 [2024-11-20 18:03:33.416440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021f0000 len:0x10000 key:0x184300 00:18:15.106 [2024-11-20 18:03:33.416457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021dff80 len:0x10000 key:0x184300 00:18:15.106 [2024-11-20 18:03:33.416473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021cff00 len:0x10000 key:0x184300 00:18:15.106 [2024-11-20 18:03:33.416491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd02000 len:0x10000 key:0x184700 00:18:15.106 [2024-11-20 18:03:33.416507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dce1000 len:0x10000 key:0x184700 00:18:15.106 [2024-11-20 18:03:33.416524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.106 [2024-11-20 18:03:33.416534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcc0000 len:0x10000 key:0x184700 00:18:15.106 [2024-11-20 18:03:33.416541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.106 [2024-11-20 18:03:33.416551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008308000 len:0x10000 key:0x184700
00:18:15.106 [2024-11-20 18:03:33.416558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.106 [2024-11-20 18:03:33.416568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000082e7000 len:0x10000 key:0x184700
00:18:15.106 [2024-11-20 18:03:33.416575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.106 [2024-11-20 18:03:33.416585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ad50000 len:0x10000 key:0x184700
00:18:15.106 [2024-11-20 18:03:33.416592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.106 [2024-11-20 18:03:33.416602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed1f000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009388000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009367000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009346000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009325000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009304000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000092e3000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000092c2000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000092a1000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009280000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6ff000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6de000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6bd000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a69c000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a67b000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a65a000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a639000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a618000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a5f7000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a5d6000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a5b5000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a594000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.416983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.416992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a573000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.417000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.417009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a552000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.417017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.417029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a531000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.417036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.417046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a510000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.417053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.417062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000082c6000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.417070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.417080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000082a5000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.417087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.417097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008284000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.417104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.107 [2024-11-20 18:03:33.417114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008263000 len:0x10000 key:0x184700
00:18:15.107 [2024-11-20 18:03:33.417122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.417131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008242000 len:0x10000 key:0x184700
00:18:15.108 [2024-11-20 18:03:33.417139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.417148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008221000 len:0x10000 key:0x184700
00:18:15.108 [2024-11-20 18:03:33.417156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.417165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008200000 len:0x10000 key:0x184700
00:18:15.108 [2024-11-20 18:03:33.417173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.417182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a90f000 len:0x10000 key:0x184700
00:18:15.108 [2024-11-20 18:03:33.417190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.417199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a8ee000 len:0x10000 key:0x184700
00:18:15.108 [2024-11-20 18:03:33.417207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022ff880 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022ef800 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022df780 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022cf700 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022bf680 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022af600 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100229f580 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100228f500 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100227f480 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100226f400 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100225f380 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100224f300 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100223f280 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100222f200 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100221f180 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100220f100 len:0x10000 key:0x184000
00:18:15.108 [2024-11-20 18:03:33.419886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025f0000 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.419903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025dff80 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.419920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025cff00 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.419938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025bfe80 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.419955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025afe00 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.419972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100259fd80 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.419988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.419998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100258fd00 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100257fc80 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100256fc00 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100255fb80 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100254fb00 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100253fa80 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100252fa00 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100251f980 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100250f900 len:0x10000 key:0x184100
00:18:15.108 [2024-11-20 18:03:33.420144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.108 [2024-11-20 18:03:33.420153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024ff880 len:0x10000 key:0x184100
00:18:15.109 [2024-11-20 18:03:33.420161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c631000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c610000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e28d000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e26c000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e24b000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e22a000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e209000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e1e8000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e1c7000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e1a6000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e185000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e164000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e143000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e122000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e101000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0e0000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a7e6000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a7c5000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a7a4000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a783000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a762000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a741000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a720000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ab1f000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aafe000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aadd000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aabc000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aa9b000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aa7a000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aa59000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aa38000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.420706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aa17000 len:0x10000 key:0x184700
00:18:15.109 [2024-11-20 18:03:33.420713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.423425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100266f400 len:0x10000 key:0x184c00
00:18:15.109 [2024-11-20 18:03:33.423438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.423449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100265f380 len:0x10000 key:0x184c00
00:18:15.109 [2024-11-20 18:03:33.423456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.423466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100264f300 len:0x10000 key:0x184c00
00:18:15.109 [2024-11-20 18:03:33.423474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.109 [2024-11-20 18:03:33.423483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100263f280 len:0x10000 key:0x184c00
00:18:15.110 [2024-11-20 18:03:33.423493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100262f200 len:0x10000 key:0x184c00
00:18:15.110 [2024-11-20 18:03:33.423511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100261f180 len:0x10000 key:0x184c00
00:18:15.110 [2024-11-20 18:03:33.423528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100260f100 len:0x10000 key:0x184c00
00:18:15.110 [2024-11-20 18:03:33.423545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029f0000 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029dff80 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029cff00 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029bfe80 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029afe00 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100299fd80 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100298fd00 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100297fc80 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100296fc00 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100295fb80 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100294fb00 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100293fa80 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100292fa00 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100291f980 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100290f900 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028ff880 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028ef800 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028df780 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028cf700 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028bf680 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028af600 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100289f580 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100288f500 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100287f480 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100286f400 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.423989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100285f380 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.423996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.424006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100284f300 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.424013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.424023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100283f280 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.424030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.424039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100282f200 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.424047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.424056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100281f180 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.424064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.424073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100280f100 len:0x10000 key:0x183f00
00:18:15.110 [2024-11-20 18:03:33.424082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.424092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bf0000 len:0x10000 key:0x184d00
00:18:15.110 [2024-11-20 18:03:33.424099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.110 [2024-11-20 18:03:33.424109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bdff80 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bcff00 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bbfe80 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bafe00 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b9fd80 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b8fd00 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b7fc80 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b6fc00 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b5fb80 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b4fb00 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b3fa80 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b2fa00 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b1f980 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b0f900 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aff880 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.424356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100270f900 len:0x10000 key:0x184c00
00:18:15.111 [2024-11-20 18:03:33.424372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb59000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e818000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7f7000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b48000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b27000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7b9000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7da000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000098b0000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.424518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009caf000 len:0x10000 key:0x184700
00:18:15.111 [2024-11-20 18:03:33.424525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.427180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf780 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.427192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.111 [2024-11-20 18:03:33.427204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf700 len:0x10000 key:0x184d00
00:18:15.111 [2024-11-20 18:03:33.427211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf680 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf600 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f580 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8f500 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7f480 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6f400 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5f380 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4f300 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3f280 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2f200 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1f180 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0f100 len:0x10000 key:0x184d00
00:18:15.112 [2024-11-20 18:03:33.427419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002df0000 len:0x10000 key:0x184200
00:18:15.112 [2024-11-20 18:03:33.427436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff80 len:0x10000 key:0x184200
00:18:15.112 [2024-11-20 18:03:33.427452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcff00 len:0x10000 key:0x184200
00:18:15.112 [2024-11-20 18:03:33.427469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfe80 len:0x10000 key:0x184200
00:18:15.112 [2024-11-20 18:03:33.427486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafe00 len:0x10000 key:0x184200
00:18:15.112 [2024-11-20 18:03:33.427505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fd80 len:0x10000 key:0x184200
00:18:15.112 [2024-11-20 18:03:33.427522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fd00 len:0x10000 key:0x184200
00:18:15.112 [2024-11-20 18:03:33.427539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.112 [2024-11-20 18:03:33.427549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fc80 len:0x10000 key:0x184200
00:18:15.113 [2024-11-20 18:03:33.427556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.113 [2024-11-20 18:03:33.427565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6fc00 len:0x10000 key:0x184200
00:18:15.113 [2024-11-20 18:03:33.427573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.113 [2024-11-20 18:03:33.427583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5fb80 len:0x10000 key:0x184200
00:18:15.113 [2024-11-20 18:03:33.427590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0
00:18:15.113 [2024-11-20 18:03:33.427599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4fb00 len:0x10000 key:0x184200
00:18:15.113 [2024-11-20 18:03:33.427606] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3fa80 len:0x10000 key:0x184200 00:18:15.113 [2024-11-20 18:03:33.427623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2fa00 len:0x10000 key:0x184200 00:18:15.113 [2024-11-20 18:03:33.427640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f980 len:0x10000 key:0x184200 00:18:15.113 [2024-11-20 18:03:33.427657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f900 len:0x10000 key:0x184200 00:18:15.113 [2024-11-20 18:03:33.427674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff880 len:0x10000 key:0x184200 00:18:15.113 [2024-11-20 18:03:33.427692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef800 len:0x10000 key:0x184200 00:18:15.113 [2024-11-20 18:03:33.427710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf780 len:0x10000 key:0x184200 00:18:15.113 [2024-11-20 18:03:33.427726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce50000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce71000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce92000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceb3000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ced4000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cef5000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf16000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf37000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf58000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf79000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf9a000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 
p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfbb000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfdc000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.427965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cffd000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.427973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.433066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d01e000 len:0x10000 key:0x184700 00:18:15.113 [2024-11-20 18:03:33.433094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.113 [2024-11-20 18:03:33.433106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d03f000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e07d000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e05c000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e03b000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e01a000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 
18:03:33.433197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff9000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd8000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfb7000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df96000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df75000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df54000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df33000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df12000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000def1000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ded0000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e2cf000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.433390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e2ae000 len:0x10000 key:0x184700 00:18:15.114 [2024-11-20 18:03:33.433397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:bed0c000 sqhd:7250 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.436849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.114 [2024-11-20 18:03:33.436987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:9840ba90 sqhd:f040 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.437018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.114 [2024-11-20 18:03:33.437041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:9840ba90 sqhd:f040 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.437065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.114 [2024-11-20 18:03:33.437087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:9840ba90 sqhd:f040 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.437110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.114 [2024-11-20 18:03:33.437132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:9840ba90 sqhd:f040 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.439899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.114 [2024-11-20 18:03:33.439937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:18:15.114 [2024-11-20 18:03:33.439961] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:18:15.114 [2024-11-20 18:03:33.440002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.114 [2024-11-20 18:03:33.440027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.440051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.114 [2024-11-20 18:03:33.440072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.114 [2024-11-20 18:03:33.440095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.114 [2024-11-20 18:03:33.440116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.440140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.440161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.442911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.115 [2024-11-20 18:03:33.442981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:18:15.115 [2024-11-20 18:03:33.443009] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:18:15.115 [2024-11-20 18:03:33.443065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.443104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.443131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.443153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.443176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.443198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.443221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.443243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.445960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.115 [2024-11-20 18:03:33.446007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:18:15.115 [2024-11-20 18:03:33.446030] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:18:15.115 [2024-11-20 18:03:33.446067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.446090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.446114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.446135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.446160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.446181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.446205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.115 [2024-11-20 18:03:33.446226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:18:15.115 [2024-11-20 18:03:33.468153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.115 [2024-11-20 18:03:33.468166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:18:15.115 [2024-11-20 18:03:33.468174] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:18:15.115 [2024-11-20 18:03:33.468228] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:15.115 [2024-11-20 18:03:33.468237] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:15.115 [2024-11-20 18:03:33.468243] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017090040 00:18:15.115 [2024-11-20 18:03:33.478956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:18:15.115 [2024-11-20 18:03:33.478981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:18:15.115 [2024-11-20 18:03:33.479044] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:18:15.115 [2024-11-20 18:03:33.479059] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:18:15.115 [2024-11-20 18:03:33.479192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:18:15.115 [2024-11-20 18:03:33.479205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:18:15.115 task offset: 16384 on job bdev=Nvme10n1 fails
00:18:15.115
00:18:15.115 Latency(us)
00:18:15.115 [2024-11-20T17:03:33.651Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average    min       max
00:18:15.115 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.115 Job: Nvme1n1 ended in about 1.94 seconds with error
00:18:15.115 Verification LBA range: start 0x0 length 0x400
00:18:15.115 Nvme1n1            : 1.94        111.25   6.95   32.96   0.00  440109.35  13052.59  1048576.00
00:18:15.115 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.115 Job: Nvme2n1 ended in about 1.94 seconds with error
00:18:15.115 Verification LBA range: start 0x0 length 0x400
00:18:15.115 Nvme2n1            : 1.94        111.18   6.95   32.94   0.00  436175.24  18568.53  1048576.00
00:18:15.115 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.115 Job: Nvme3n1 ended in about 1.94 seconds with error
00:18:15.115 Verification LBA range: start 0x0 length 0x400
00:18:15.115 Nvme3n1            : 1.94        114.21   7.14   32.92   0.00  423315.20  28180.48  1048576.00
00:18:15.115 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.115 Job: Nvme4n1 ended in about 1.95 seconds with error
00:18:15.115 Verification LBA range: start 0x0 length 0x400
00:18:15.115 Nvme4n1            : 1.95        118.77   7.42   32.90   0.00  406488.38  4068.69   1055566.51
00:18:15.115 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.115 Job: Nvme5n1 ended in about 1.95 seconds with error
00:18:15.115 Verification LBA range: start 0x0 length 0x400
00:18:15.115 Nvme5n1            : 1.95        106.88   6.68   32.89   0.00  436597.46  44346.03  1055566.51
00:18:15.115 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.115 Job: Nvme6n1 ended in about 1.95 seconds with error
00:18:15.115 Verification LBA range: start 0x0 length 0x400
00:18:15.115 Nvme6n1            : 1.95        110.41   6.90   32.87   0.00  421953.97  37792.43  1118481.07
00:18:15.115 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.115 Job: Nvme7n1 ended in about 1.95 seconds with error
00:18:15.115 Verification LBA range: start 0x0 length 0x400
00:18:15.115 Nvme7n1            : 1.95        115.99   7.25   32.85   0.00  401309.70  3604.48   1111490.56
00:18:15.116 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.116 Job: Nvme8n1 ended in about 1.95 seconds with error
00:18:15.116 Verification LBA range: start 0x0 length 0x400
00:18:15.116 Nvme8n1            : 1.95        131.30   8.21   32.83   0.00  360036.01  4669.44   1097509.55
00:18:15.116 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.116 Job: Nvme9n1 ended in about 1.95 seconds with error
00:18:15.116 Verification LBA range: start 0x0 length 0x400
00:18:15.116 Nvme9n1            : 1.95        114.82   7.18   32.81   0.00  396291.03  48933.55  1083528.53
00:18:15.116 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:15.116 Job: Nvme10n1 ended in about 1.88 seconds with error
00:18:15.116 Verification LBA range: start 0x0 length 0x400
00:18:15.116 Nvme10n1           : 1.88        67.98    4.25   33.99   0.00  563894.04  88255.15  1062557.01
00:18:15.116 [2024-11-20T17:03:33.652Z] ===================================================================================================================
00:18:15.116 [2024-11-20T17:03:33.652Z] Total              :             1102.78  68.92  329.95  0.00  423009.48  3604.48   1118481.07
00:18:15.116 [2024-11-20 18:03:33.507417] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:15.116 [2024-11-20 18:03:33.518802] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:15.116 [2024-11-20 18:03:33.518821] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:15.116 [2024-11-20 18:03:33.518828] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170b9040
00:18:15.116 [2024-11-20 18:03:33.519113] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:15.116 [2024-11-20 18:03:33.519122] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:15.116 [2024-11-20 18:03:33.519128] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170b0940
00:18:15.116 [2024-11-20 18:03:33.519805] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:15.116 [2024-11-20 18:03:33.519815] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:15.116 [2024-11-20 18:03:33.519821] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001706e8c0
00:18:15.116 [2024-11-20 18:03:33.519992] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:15.116 [2024-11-20 18:03:33.520000] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:15.116 [2024-11-20 18:03:33.520006] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017098000
00:18:15.376 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3573982
00:18:15.376 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:18:15.376 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3573982
00:18:15.376 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:18:15.376 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:15.376 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:18:15.376 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:15.376 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3573982
00:18:15.946 [2024-11-20 18:03:34.402122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:15.946 [2024-11-20
18:03:34.402146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:18:15.946 [2024-11-20 18:03:34.403850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.946 [2024-11-20 18:03:34.403894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:15.946 [2024-11-20 18:03:34.405983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.946 [2024-11-20 18:03:34.405994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:18:15.946 [2024-11-20 18:03:34.407746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.946 [2024-11-20 18:03:34.407779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:18:15.946 [2024-11-20 18:03:34.409916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.946 [2024-11-20 18:03:34.409958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:18:15.946 [2024-11-20 18:03:34.409979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:18:15.946 [2024-11-20 18:03:34.410000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:18:15.946 [2024-11-20 18:03:34.410022] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:18:15.946 [2024-11-20 18:03:34.410048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:18:15.946 [2024-11-20 18:03:34.410182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:15.946 [2024-11-20 18:03:34.410204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:15.946 [2024-11-20 18:03:34.410223] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:18:15.946 [2024-11-20 18:03:34.410245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:15.946 [2024-11-20 18:03:34.410274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:18:15.946 [2024-11-20 18:03:34.410293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:18:15.946 [2024-11-20 18:03:34.410313] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:18:15.946 [2024-11-20 18:03:34.410333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
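Two quick consistency checks on the Latency(us) table above, using the 64 KiB (65536-byte) IO size and queue depth 64 from the job headers. This is an illustrative shell one-liner, not part of the test harness: throughput should equal IOPS times IO size, and by Little's law the average per-IO latency should sit near depth divided by the total completion rate (completed plus failed IOs per second).

    $ awk 'BEGIN {
        printf "%.2f MiB/s\n", 111.25 * 65536 / (1024 * 1024)   # Nvme1n1 MiB/s column
        printf "%.0f us\n", 64 / (111.25 + 32.96) * 1e6         # Little-law estimate of avg latency
    }'
    6.95 MiB/s
    443797 us

Both agree with the Nvme1n1 row: 6.95 MiB/s exactly, and roughly 443.8 ms versus the reported 440109.35 us average, which is consistent with a full queue-depth-64 backlog draining while the controllers were being reset.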
00:18:15.946 [2024-11-20 18:03:34.410364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:18:15.946 [2024-11-20 18:03:34.410384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:18:15.946 [2024-11-20 18:03:34.410404] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:18:15.946 [2024-11-20 18:03:34.410424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:18:15.946 [2024-11-20 18:03:34.410452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:18:15.946 [2024-11-20 18:03:34.410471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:18:15.946 [2024-11-20 18:03:34.410491] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:18:15.946 [2024-11-20 18:03:34.410512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:18:15.946 [2024-11-20 18:03:34.472563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:15.946 [2024-11-20 18:03:34.472605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:18:15.946 [2024-11-20 18:03:34.472689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:18:15.946 [2024-11-20 18:03:34.472709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:18:15.946 [2024-11-20 18:03:34.472730] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:18:15.946 [2024-11-20 18:03:34.472752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:18:16.207 [2024-11-20 18:03:34.523354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:16.207 [2024-11-20 18:03:34.523401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:18:16.207 [2024-11-20 18:03:34.525190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:16.207 [2024-11-20 18:03:34.525201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:18:16.207 [2024-11-20 18:03:34.526993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:16.207 [2024-11-20 18:03:34.527025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:18:16.207 [2024-11-20 18:03:34.528744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:16.207 [2024-11-20 18:03:34.528776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:18:16.207 [2024-11-20 18:03:34.528797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:18:16.207 [2024-11-20 18:03:34.528816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:18:16.207 [2024-11-20 18:03:34.528836] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:18:16.207 [2024-11-20 18:03:34.528858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:18:16.207 [2024-11-20 18:03:34.528899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:18:16.207 [2024-11-20 18:03:34.528919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:18:16.207 [2024-11-20 18:03:34.528938] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:18:16.207 [2024-11-20 18:03:34.528959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:18:16.207 [2024-11-20 18:03:34.529076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:18:16.207 [2024-11-20 18:03:34.529099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:18:16.207 [2024-11-20 18:03:34.529119] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:18:16.207 [2024-11-20 18:03:34.529141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:18:16.207 [2024-11-20 18:03:34.529170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:18:16.207 [2024-11-20 18:03:34.529189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:18:16.207 [2024-11-20 18:03:34.529208] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:18:16.207 [2024-11-20 18:03:34.529229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
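The NOT wait 3573982 call traced earlier funnels the exit status of the killed bdevperf process through autotest_common.sh's status folding, which the xtrace lines below walk through (es=255, then 127, then 1). A minimal sketch of that pattern, reconstructed from the traced statements rather than the verbatim source:

    NOT() {
        local es=0
        "$@" || es=$?             # wait on the dead bdevperf returns 255 here
        if ((es > 128)); then     # >128 conventionally means killed by a signal
            es=$((es & ~128))     # strip the signal bit: 255 -> 127
            case "$es" in         # (the real helper branches on specific signals)
                *) es=1 ;;        # fold any signal death into a plain failure
            esac
        fi
        ((!es == 0))              # invert: NOT succeeds only if the command failed
    }

So NOT passes exactly because bdevperf exited abnormally while the target was shutting down, which is the outcome this negative test asserts.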
00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:16.207 rmmod nvme_rdma 00:18:16.207 rmmod nvme_fabrics 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3573818 ']' 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3573818 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3573818 ']' 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3573818 00:18:16.207 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3573818) - No such process 00:18:16.207 
18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3573818 is not found' 00:18:16.207 Process with pid 3573818 is not found 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:16.207 00:18:16.207 real 0m5.480s 00:18:16.207 user 0m15.829s 00:18:16.207 sys 0m1.089s 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.207 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:16.207 ************************************ 00:18:16.207 END TEST nvmf_shutdown_tc3 00:18:16.207 ************************************ 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:16.469 ************************************ 00:18:16.469 START TEST nvmf_shutdown_tc4 00:18:16.469 ************************************ 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:16.469 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:16.469 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:16.469 
18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:16.469 Found net devices under 0000:98:00.0: mlx_0_0 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:16.469 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:16.470 Found net devices under 0000:98:00.1: mlx_0_1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:18:16.470 18:03:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:16.470 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:16.470 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:18:16.470 altname enp152s0f0np0 00:18:16.470 altname ens817f0np0 00:18:16.470 inet 192.168.100.8/24 scope global mlx_0_0 00:18:16.470 valid_lft forever preferred_lft forever 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:16.470 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:16.470 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:18:16.470 altname enp152s0f1np1 00:18:16.470 altname ens817f1np1 00:18:16.470 inet 192.168.100.9/24 scope global mlx_0_1 00:18:16.470 valid_lft forever preferred_lft forever 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 
-- # get_ip_address mlx_0_1 00:18:16.470 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:16.730 192.168.100.9' 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:16.730 192.168.100.9' 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:16.730 192.168.100.9' 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.730 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3574998 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3574998 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3574998 ']' 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.731 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:16.731 [2024-11-20 18:03:35.117600] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:18:16.731 [2024-11-20 18:03:35.117663] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.731 [2024-11-20 18:03:35.213488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.731 [2024-11-20 18:03:35.247183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.731 [2024-11-20 18:03:35.247214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.731 [2024-11-20 18:03:35.247221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.731 [2024-11-20 18:03:35.247226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.731 [2024-11-20 18:03:35.247230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
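The allocate_nic_ips/get_ip_address calls traced above reduce to a three-stage pipeline over iproute2 output. A self-contained sketch of that helper, assuming only that the interface carries a single IPv4 address (as mlx_0_0 and mlx_0_1 do here):

# Mirrors the @116/@117 trace lines above.
get_ip_address() {
  local interface=$1
  # "ip -o -4 addr show mlx_0_0" prints one line; field 4 is "192.168.100.8/24";
  # cut strips the prefix length, leaving the bare address.
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this testbed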
00:18:16.731 [2024-11-20 18:03:35.248781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.731 [2024-11-20 18:03:35.248936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.731 [2024-11-20 18:03:35.249263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.731 [2024-11-20 18:03:35.249263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.732 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:17.732 [2024-11-20 18:03:36.002239] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21b0190/0x21b4680) succeed. 00:18:17.732 [2024-11-20 18:03:36.011789] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21b1820/0x21f5d20) succeed. 
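rpc_cmd in the trace forwards its arguments to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock, so the transport created above should be reproducible by running the script directly -- the path is this workspace's checkout, and the flags are exactly those in the trace:

# Create the RDMA transport on a running nvmf_tgt, as the trace just did.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192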
00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.732 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.733 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:17.733 Malloc1 00:18:17.733 [2024-11-20 18:03:36.217907] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:17.733 Malloc2 00:18:18.002 Malloc3 00:18:18.002 Malloc4 00:18:18.002 Malloc5 00:18:18.002 Malloc6 00:18:18.002 Malloc7 00:18:18.002 Malloc8 00:18:18.002 Malloc9 00:18:18.266 Malloc10 00:18:18.266 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.266 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:18.266 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.266 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:18.266 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3575385 00:18:18.266 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:18:18.266 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:18:18.266 [2024-11-20 18:03:36.715937] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
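The ten config blocks the cat loop appended to rpcs.txt are not echoed into the log; only their effect is visible above (Malloc1..Malloc10 and the RDMA listener on 192.168.100.8:4420). A plausible reconstruction of one loop iteration using stock rpc.py commands -- the cnodeN NQNs are confirmed by the Keep Alive errors further down, but the bdev size, block size, and serial number here are assumptions, not read from rpcs.txt:

# Hypothetical per-subsystem setup for i=1..10; sizes and serials are illustrative.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
i=1
$rpc bdev_malloc_create -b Malloc$i 128 512                           # 128 MiB bdev, 512 B blocks (assumed)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # serial number assumed
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420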
00:18:23.544 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3574998
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3574998 ']'
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3574998
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3574998
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3574998'
00:18:23.544 killing process with pid 3574998
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3574998
18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3574998
00:18:23.544 NVMe io qpair process completion error
00:18:23.544 NVMe io qpair process completion error
00:18:23.544 NVMe io qpair process completion error
00:18:23.544 NVMe io qpair process completion error
00:18:23.544 NVMe io qpair process completion error
00:18:23.544 NVMe io qpair process completion error
00:18:23.544 NVMe io qpair process completion error
00:18:23.804 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:18:24.377 [hundreds of identical lines elided: 'Write completed with error (sct=0, sc=8)', many followed by 'starting I/O failed: -6', as the in-flight spdk_nvme_perf writes fail against the dying target]
00:18:24.378 [2024-11-20 18:03:42.804918] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:18:24.378 ['Write completed with error (sct=0, sc=8)' repetitions elided]
00:18:24.379 [2024-11-20 18:03:42.817075] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:18:24.379 ['Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' repetitions elided]
00:18:24.379 [2024-11-20 18:03:42.828096] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:18:24.379 ['Write completed with error (sct=0, sc=8)' repetitions elided]
00:18:24.380 [2024-11-20 18:03:42.839110] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:18:24.380 ['Write completed with error (sct=0, sc=8)' repetitions continue; the capture breaks off mid-run here]
error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write completed with error (sct=0, sc=8) 00:18:24.380 Write 
completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 starting I/O failed: -6 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 starting I/O failed: -6 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 starting I/O failed: -6 00:18:24.381 [2024-11-20 18:03:42.849572] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 
Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 
00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 starting I/O failed: -6 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 starting I/O failed: -6 00:18:24.381 [2024-11-20 18:03:42.860909] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with error (sct=0, sc=8) 00:18:24.381 Write completed with 
error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write 
completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 Write completed with error (sct=0, sc=8) 00:18:24.382 [2024-11-20 18:03:42.871554] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed 00:18:24.382 NVMe io qpair process completion error 00:18:24.382 NVMe io qpair process completion error 00:18:24.382 NVMe io qpair process completion error 00:18:24.642 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3575385 00:18:24.642 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:18:24.642 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3575385 00:18:24.642 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:18:24.642 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.642 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:18:24.642 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.642 18:03:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3575385 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 [2024-11-20 18:03:43.875815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:25.582 [2024-11-20 18:03:43.875905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 Write completed with error (sct=0, sc=8) 00:18:25.582 [2024-11-20 18:03:43.878394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:25.582 [2024-11-20 18:03:43.878440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
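The 'NOT wait 3575385' sequence traced above is the harness asserting a negative: pid 3575385, the backgrounded spdk_nvme_perf run, must exit non-zero once its controllers have been torn down. A minimal sketch of that inversion idiom, assuming a simplified helper and omitting the valid_exec_arg/type -t plumbing visible in the trace:

# Simplified sketch of the NOT-style helper: run a command and invert its
# exit status, so the test step succeeds only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # exit 0 (success) only if the command failed
}

# usage mirroring the trace: perf runs in the background, the target is
# torn down while I/O is in flight, and the reaped status must be non-zero
NOT wait "$perf_pid"

Statuses above 128 (signal deaths) are treated separately by the harness, as the later '(( es > 128 ))' check in this log shows.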
00:18:25.582 Write completed with error (sct=0, sc=8)
[... duplicate "Write completed with error (sct=0, sc=8)" entries collapsed ...]
00:18:25.583 [2024-11-20 18:03:43.889852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:25.583 [2024-11-20 18:03:43.889934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:18:25.583 Write completed with error (sct=0, sc=8)
[... duplicate "Write completed with error (sct=0, sc=8)" entries collapsed ...]
00:18:25.583 [2024-11-20 18:03:43.893022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:25.583 [2024-11-20 18:03:43.893070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:25.584 Write completed with error (sct=0, sc=8)
[... duplicate "Write completed with error (sct=0, sc=8)" entries collapsed ...]
00:18:25.584 [2024-11-20 18:03:43.895709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:25.584 Write completed with error (sct=0, sc=8)
00:18:25.584 Write completed with error (sct=0, sc=8)
00:18:25.584 [2024-11-20 18:03:43.895747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:18:25.584 Write completed with error (sct=0, sc=8)
[... duplicate "Write completed with error (sct=0, sc=8)" entries collapsed ...]
00:18:25.584 [2024-11-20 18:03:43.902556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:25.584 [2024-11-20 18:03:43.902618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:18:25.584 Write completed with error (sct=0, sc=8)
[... duplicate "Write completed with error (sct=0, sc=8)" entries collapsed ...]
00:18:25.584 [2024-11-20 18:03:43.905236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:25.584 [2024-11-20 18:03:43.905273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:18:25.584 Write completed with error (sct=0, sc=8)
[... duplicate "Write completed with error (sct=0, sc=8)" entries collapsed ...]
00:18:25.585 [2024-11-20 18:03:43.907779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:25.585 [2024-11-20 18:03:43.907812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:18:25.585 Write completed with error (sct=0, sc=8)
[... duplicate "Write completed with error (sct=0, sc=8)" entries collapsed ...]
00:18:25.585 [2024-11-20 18:03:43.910289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:25.585 [2024-11-20 18:03:43.910324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:18:25.585 [2024-11-20 18:03:43.950969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:25.585 [2024-11-20 18:03:43.951009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:18:25.585 Initializing NVMe Controllers
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:18:25.585 Controller IO queue size 128, less than required.
00:18:25.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:18:25.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:18:25.585 Initialization complete. Launching workers.
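The repeated queue-size advisory above means the workload requested a deeper queue than the 128 entries each attached controller exposes, so surplus requests wait inside the host NVMe driver until queue slots free up. A representative invocation of the perf binary named further down could look like the following sketch; the flags and values here are illustrative assumptions, not the exact command line used by shutdown.sh, which this excerpt does not show:

# Hypothetical example: drive writes at queue depth 128 against one of the
# RDMA subsystems listed above. A depth the controller's 128-entry IO queue
# cannot fully absorb is what triggers the advisory printed in this log.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# -q: queue depth per controller   -o: I/O size in bytes
# -w: workload pattern             -t: run time in seconds
# -r: transport ID of the target subsystem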
00:18:25.585 ========================================================
00:18:25.585                                                                                   Latency(us)
00:18:25.585 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:18:25.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1830.06      78.64   69025.69      93.45 1193303.69
00:18:25.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1830.74      78.66   69132.80      87.42 1194925.41
00:18:25.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1857.14      79.80   80546.00      92.60 2240529.40
00:18:25.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1819.57      78.18   69580.34      93.63 1235427.84
00:18:25.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1832.77      78.75   69175.77      93.30 1230969.17
00:18:25.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1857.99      79.84   80505.77      89.35 2194217.54
00:18:25.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1817.71      78.10   69781.35      88.38 1251800.21
00:18:25.586 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1830.40      78.65   69372.66      88.73 1245144.02
00:18:25.586 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1833.45      78.78   69344.25      93.78 1253646.51
00:18:25.586 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1856.97      79.79   80494.24      91.15 2075758.57
00:18:25.586 ========================================================
00:18:25.586 Total                                                                          :   18366.80     789.20   72733.17      87.42 2240529.40
00:18:25.586 
00:18:25.586 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
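In the trace above, es=1 records the perf tool's exit status and the two arithmetic checks classify it: a status above 128 would mean death by signal (128 plus the signal number), while any other non-zero status is an ordinary failure, which is exactly what a shutdown test wants to see. A condensed sketch of that classification, simplified from the traced checks (the real helper in autotest_common.sh has more branches than shown here):

wait "$perf_pid"; es=$?                      # es=1 in this run: perf reported errors
if (( es > 128 )); then                      # 129+ would mean 128 + signal number
    echo "killed by signal $((es - 128))"    # branch not taken in this run
fi
(( !es == 0 )) && echo "failed as expected (es=$es)"   # non-zero status: NOT passes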
18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.586 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:25.586 rmmod nvme_rdma 00:18:25.586 rmmod nvme_fabrics 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3574998 ']' 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3574998 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3574998 ']' 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3574998 00:18:25.586 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3574998) - No such process 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3574998 is not found' 00:18:25.586 Process with pid 3574998 is not found 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:25.586 00:18:25.586 real 0m9.227s 00:18:25.586 user 0m33.999s 00:18:25.586 sys 0m1.105s 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:25.586 ************************************ 00:18:25.586 END TEST nvmf_shutdown_tc4 00:18:25.586 ************************************ 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:18:25.586 00:18:25.586 real 0m36.021s 00:18:25.586 user 1m44.192s 00:18:25.586 sys 0m10.825s 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:25.586 ************************************ 00:18:25.586 END TEST nvmf_shutdown 00:18:25.586 ************************************ 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.586 18:03:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:25.847 
************************************ 00:18:25.847 START TEST nvmf_nsid 00:18:25.847 ************************************ 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:18:25.847 * Looking for test storage... 00:18:25.847 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.847 --rc genhtml_branch_coverage=1 00:18:25.847 --rc genhtml_function_coverage=1 00:18:25.847 --rc genhtml_legend=1 00:18:25.847 --rc geninfo_all_blocks=1 00:18:25.847 --rc geninfo_unexecuted_blocks=1 00:18:25.847 00:18:25.847 ' 00:18:25.847 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.847 --rc genhtml_branch_coverage=1 00:18:25.847 --rc genhtml_function_coverage=1 00:18:25.847 --rc genhtml_legend=1 00:18:25.847 --rc geninfo_all_blocks=1 00:18:25.848 --rc geninfo_unexecuted_blocks=1 00:18:25.848 00:18:25.848 ' 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.848 --rc genhtml_branch_coverage=1 00:18:25.848 --rc genhtml_function_coverage=1 00:18:25.848 --rc genhtml_legend=1 00:18:25.848 --rc geninfo_all_blocks=1 00:18:25.848 --rc geninfo_unexecuted_blocks=1 00:18:25.848 00:18:25.848 ' 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.848 --rc genhtml_branch_coverage=1 00:18:25.848 --rc genhtml_function_coverage=1 00:18:25.848 --rc genhtml_legend=1 00:18:25.848 --rc geninfo_all_blocks=1 00:18:25.848 --rc geninfo_unexecuted_blocks=1 00:18:25.848 00:18:25.848 ' 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.848 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:26.109 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.247 18:03:52 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:34.247 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:34.247 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
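Both ConnectX ports above matched as mlx5 parts (vendor 0x15b3, device 0x1015), and the loop that follows resolves each matched PCI function to its network interface purely through sysfs globbing, which is what produces the "Found net devices under 0000:98:00.0: mlx_0_0" entries below. The same lookup as a standalone command, with the PCI address taken from this log:

    # list the net device(s) registered under a PCI function via sysfs;
    # this is the mechanism behind pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    ls /sys/bus/pci/devices/0000:98:00.0/net/    # expected here: mlx_0_0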
00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:34.247 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:34.248 Found net devices under 0000:98:00.0: mlx_0_0 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:34.248 Found net devices under 0000:98:00.1: mlx_0_1 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:34.248 18:03:52 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
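The get_ip_address helper traced just above is a three-stage pipeline: ip -o -4 addr show prints one line per address, awk keeps the ADDR/PREFIX column, and cut drops the prefix length; the next entries show it yielding the target address used throughout this suite. Runnable as-is on this host, with the interface name and result taken from this log:

    # extract the IPv4 address of an RDMA interface, exactly as traced above
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8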
00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:34.248 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.248 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:18:34.248 altname enp152s0f0np0 00:18:34.248 altname ens817f0np0 00:18:34.248 inet 192.168.100.8/24 scope global mlx_0_0 00:18:34.248 valid_lft forever preferred_lft forever 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:34.248 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.248 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:18:34.248 altname enp152s0f1np1 00:18:34.248 altname ens817f1np1 00:18:34.248 inet 192.168.100.9/24 scope global mlx_0_1 00:18:34.248 valid_lft forever preferred_lft forever 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.248 
18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:34.248 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:34.249 192.168.100.9' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:34.249 192.168.100.9' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:34.249 192.168.100.9' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:34.249 18:03:52 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3581085 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3581085 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3581085 ']' 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.249 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:34.511 [2024-11-20 18:03:52.790998] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:18:34.511 [2024-11-20 18:03:52.791066] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.511 [2024-11-20 18:03:52.871578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.511 [2024-11-20 18:03:52.906094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.511 [2024-11-20 18:03:52.906127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.511 [2024-11-20 18:03:52.906135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.511 [2024-11-20 18:03:52.906141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.511 [2024-11-20 18:03:52.906147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
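nvmfappstart above reduces to launching the target and blocking until its RPC socket answers. The binary, flags and socket path below are as logged; the polling loop is a simplified stand-in for waitforlisten (rpc_get_methods is a standard SPDK RPC, but the retry cadence here is an assumption, not the helper's actual implementation):

    # start the nvmf target on core mask 0x1 and wait for /var/tmp/spdk.sock
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done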
00:18:34.511 [2024-11-20 18:03:52.906743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.082 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.082 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:18:35.083 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.083 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.083 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3581295 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ce5150e4-7226-4f13-9395-34ac9e9b8a33 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=836f61de-7623-4014-8598-5e655d0f9347 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ebc9d3a5-4b73-4e71-a26d-43f4445d7653 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.344 18:03:53 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:35.344 null0 00:18:35.344 null1 00:18:35.344 [2024-11-20 18:03:53.678544] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:18:35.344 [2024-11-20 18:03:53.678595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581295 ] 00:18:35.344 null2 00:18:35.344 [2024-11-20 18:03:53.719019] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x116e830/0x117efc0) succeed. 00:18:35.344 [2024-11-20 18:03:53.732203] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x116fce0/0x11ff000) succeed. 00:18:35.344 [2024-11-20 18:03:53.773937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.344 [2024-11-20 18:03:53.785915] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:35.344 [2024-11-20 18:03:53.809829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3581295 /var/tmp/tgt2.sock 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3581295 ']' 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:18:35.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.344 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:35.605 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.605 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:18:35.605 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:18:35.866 [2024-11-20 18:03:54.342387] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18985c0/0x16abdb0) succeed. 00:18:35.866 [2024-11-20 18:03:54.357103] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x169b140/0x16ed450) succeed. 
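At this point nsid.sh has a second target on /var/tmp/tgt2.sock with bdevs null0, null1 and null2, and the next entries show it listening on port 4421 and the host connecting to nqn.2024-10.io.spdk:cnode2. The rpc_cmd batch itself is not echoed in the trace, so the sequence below is an assumed reconstruction: the socket, subsystem NQN, bdev name, listener address and UUID come from this log, while the null bdev sizes and the exact RPC order are guesses. The check further down then confirms each namespace NGUID is simply its UUID with the dashes stripped (uuid2nguid in the trace is literally tr -d -):

    # assumed sketch of the per-namespace setup driven over tgt2.sock
    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc bdev_null_create null0 100 4096                   # size/block values assumed
    $rpc nvmf_create_transport -t rdma
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 \
        --uuid ce5150e4-7226-4f13-9395-34ac9e9b8a33        # ns1uuid from this run
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 \
        -t rdma -a 192.168.100.8 -s 4421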
00:18:36.128 [2024-11-20 18:03:54.403330] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:36.128 nvme0n1 nvme0n2 00:18:36.128 nvme1n1 00:18:36.128 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:18:36.128 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:18:36.128 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:48.357 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ce5150e4-7226-4f13-9395-34ac9e9b8a33 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ce5150e472264f13939534ac9e9b8a33 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CE5150E472264F13939534AC9E9B8A33 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CE5150E472264F13939534AC9E9B8A33 == \C\E\5\1\5\0\E\4\7\2\2\6\4\F\1\3\9\3\9\5\3\4\A\C\9\E\9\B\8\A\3\3 ]] 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:48.358 18:04:06 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 836f61de-7623-4014-8598-5e655d0f9347 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=836f61de7623401485985e655d0f9347 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 836F61DE7623401485985E655D0F9347 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 836F61DE7623401485985E655D0F9347 == \8\3\6\F\6\1\D\E\7\6\2\3\4\0\1\4\8\5\9\8\5\E\6\5\5\D\0\F\9\3\4\7 ]] 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ebc9d3a5-4b73-4e71-a26d-43f4445d7653 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ebc9d3a54b734e71a26d43f4445d7653 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EBC9D3A54B734E71A26D43F4445D7653 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ EBC9D3A54B734E71A26D43F4445D7653 == 
\E\B\C\9\D\3\A\5\4\B\7\3\4\E\7\1\A\2\6\D\4\3\F\4\4\4\5\D\7\6\5\3 ]] 00:18:48.358 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:18:58.357 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:18:58.357 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:18:58.357 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3581295 00:18:58.357 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3581295 ']' 00:18:58.357 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3581295 00:18:58.357 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:58.357 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.358 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3581295 00:18:58.358 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.358 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.358 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3581295' 00:18:58.358 killing process with pid 3581295 00:18:58.358 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3581295 00:18:58.358 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3581295 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:58.618 rmmod nvme_rdma 00:18:58.618 rmmod nvme_fabrics 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:58.618 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3581085 ']' 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3581085 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3581085 ']' 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3581085 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3581085 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3581085' 00:18:58.618 killing process with pid 3581085 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3581085 00:18:58.618 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3581085 00:18:58.879 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:58.879 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:58.879 00:18:58.879 real 0m33.085s 00:18:58.879 user 0m49.921s 00:18:58.879 sys 0m7.642s 00:18:58.879 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.879 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:58.879 ************************************ 00:18:58.879 END TEST nvmf_nsid 00:18:58.879 ************************************ 00:18:58.879 18:04:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:58.879 00:18:58.879 real 9m1.567s 00:18:58.879 user 20m48.143s 00:18:58.879 sys 2m29.499s 00:18:58.879 18:04:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.879 18:04:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.879 ************************************ 00:18:58.879 END TEST nvmf_target_extra 00:18:58.879 ************************************ 00:18:58.879 18:04:17 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:18:58.879 18:04:17 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.879 18:04:17 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.879 18:04:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:58.879 ************************************ 00:18:58.879 START TEST nvmf_host 00:18:58.879 ************************************ 00:18:58.879 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:18:59.141 * Looking for test storage... 
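The killprocess teardown traced above follows a fixed pattern: validate the pid, confirm the process is still alive with kill -0, read its command name with ps so a sudo wrapper is never killed by mistake, then kill and reap it. A minimal sketch condensed from the steps visible in the trace (Linux-only; the real helper's sudo branch and non-Linux path are elided here):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1         # process must still exist
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1        # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                # reaping only works for our own children
    }
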
00:18:59.141 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:59.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.141 --rc genhtml_branch_coverage=1 00:18:59.141 --rc genhtml_function_coverage=1 00:18:59.141 --rc genhtml_legend=1 00:18:59.141 --rc geninfo_all_blocks=1 00:18:59.141 --rc geninfo_unexecuted_blocks=1 00:18:59.141 00:18:59.141 ' 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:18:59.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.141 --rc genhtml_branch_coverage=1 00:18:59.141 --rc genhtml_function_coverage=1 00:18:59.141 --rc genhtml_legend=1 00:18:59.141 --rc geninfo_all_blocks=1 00:18:59.141 --rc geninfo_unexecuted_blocks=1 00:18:59.141 00:18:59.141 ' 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:59.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.141 --rc genhtml_branch_coverage=1 00:18:59.141 --rc genhtml_function_coverage=1 00:18:59.141 --rc genhtml_legend=1 00:18:59.141 --rc geninfo_all_blocks=1 00:18:59.141 --rc geninfo_unexecuted_blocks=1 00:18:59.141 00:18:59.141 ' 00:18:59.141 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:59.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.141 --rc genhtml_branch_coverage=1 00:18:59.142 --rc genhtml_function_coverage=1 00:18:59.142 --rc genhtml_legend=1 00:18:59.142 --rc geninfo_all_blocks=1 00:18:59.142 --rc geninfo_unexecuted_blocks=1 00:18:59.142 00:18:59.142 ' 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.142 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.142 ************************************ 00:18:59.142 START TEST nvmf_multicontroller 00:18:59.142 ************************************ 00:18:59.142 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:59.404 * Looking for test storage... 00:18:59.404 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.404 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:59.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.405 --rc genhtml_branch_coverage=1 00:18:59.405 --rc genhtml_function_coverage=1 00:18:59.405 --rc genhtml_legend=1 00:18:59.405 --rc geninfo_all_blocks=1 00:18:59.405 --rc geninfo_unexecuted_blocks=1 00:18:59.405 00:18:59.405 ' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:59.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.405 --rc genhtml_branch_coverage=1 00:18:59.405 --rc genhtml_function_coverage=1 00:18:59.405 --rc genhtml_legend=1 00:18:59.405 --rc geninfo_all_blocks=1 00:18:59.405 --rc geninfo_unexecuted_blocks=1 00:18:59.405 00:18:59.405 ' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:59.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.405 --rc genhtml_branch_coverage=1 00:18:59.405 --rc genhtml_function_coverage=1 00:18:59.405 --rc genhtml_legend=1 00:18:59.405 --rc geninfo_all_blocks=1 00:18:59.405 --rc geninfo_unexecuted_blocks=1 00:18:59.405 00:18:59.405 ' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:59.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.405 --rc genhtml_branch_coverage=1 00:18:59.405 --rc genhtml_function_coverage=1 00:18:59.405 --rc genhtml_legend=1 00:18:59.405 --rc geninfo_all_blocks=1 00:18:59.405 --rc geninfo_unexecuted_blocks=1 00:18:59.405 00:18:59.405 ' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
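The lcov version probe that opens each test above walks scripts/common.sh's cmp_versions: both version strings are split on '.', '-' or ':' into arrays and compared component by component, padding missing components with 0. A simplified sketch of that comparison, supporting only the '<' and '>' operators seen in the trace:

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        return 1    # equal, so neither strictly '<' nor '>'
    }

    cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 predates 2'
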
00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.405 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.405 18:04:17 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:18:59.405 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:59.405 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:59.406 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:18:59.406 00:18:59.406 real 0m0.194s 00:18:59.406 user 0m0.107s 00:18:59.406 sys 0m0.103s 00:18:59.406 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.406 18:04:17 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:59.406 ************************************ 00:18:59.406 END TEST nvmf_multicontroller 00:18:59.406 ************************************ 00:18:59.406 18:04:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:59.406 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:59.406 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.406 18:04:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.406 ************************************ 00:18:59.406 START TEST nvmf_aer 00:18:59.406 ************************************ 00:18:59.406 18:04:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:59.667 * Looking for test storage... 
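Only fragments of run_test are visible above (the '[ 3 -le 1 ]' argument guard, the xtrace toggles, and the START/END banners with real/user/sys timings), so its overall shape can be reconstructed rather than quoted. A plausible reimplementation of just that visible behavior:

    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1                    # need a command to run
        local banner; banner=$(printf '*%.0s' {1..40})
        printf '%s\n' "$banner" "START TEST $name" "$banner"
        time "$@"                                    # 'time' produces the real/user/sys lines
        local rc=$?
        printf '%s\n' "$banner" "END TEST $name" "$banner"
        return $rc
    }

    run_test nvmf_aer ./test/nvmf/host/aer.sh --transport=rdma
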
00:18:59.667 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:59.667 18:04:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:59.667 18:04:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:18:59.667 18:04:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:59.667 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:59.667 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.667 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.667 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.667 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.667 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:59.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.668 --rc genhtml_branch_coverage=1 00:18:59.668 --rc genhtml_function_coverage=1 00:18:59.668 --rc genhtml_legend=1 00:18:59.668 --rc geninfo_all_blocks=1 00:18:59.668 --rc geninfo_unexecuted_blocks=1 00:18:59.668 00:18:59.668 ' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:59.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.668 --rc genhtml_branch_coverage=1 00:18:59.668 --rc genhtml_function_coverage=1 00:18:59.668 --rc genhtml_legend=1 00:18:59.668 --rc geninfo_all_blocks=1 00:18:59.668 --rc geninfo_unexecuted_blocks=1 00:18:59.668 00:18:59.668 ' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:59.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.668 --rc genhtml_branch_coverage=1 00:18:59.668 --rc genhtml_function_coverage=1 00:18:59.668 --rc genhtml_legend=1 00:18:59.668 --rc geninfo_all_blocks=1 00:18:59.668 --rc geninfo_unexecuted_blocks=1 00:18:59.668 00:18:59.668 ' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:59.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.668 --rc genhtml_branch_coverage=1 00:18:59.668 --rc genhtml_function_coverage=1 00:18:59.668 --rc genhtml_legend=1 00:18:59.668 --rc geninfo_all_blocks=1 00:18:59.668 --rc geninfo_unexecuted_blocks=1 00:18:59.668 00:18:59.668 ' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.668 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:18:59.668 18:04:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:07.809 18:04:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:07.809 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:07.809 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:07.809 Found net devices under 0000:98:00.0: mlx_0_0 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.809 
18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:07.809 Found net devices under 0000:98:00.1: mlx_0_1 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.809 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.810 18:04:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:07.810 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.810 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:19:07.810 altname enp152s0f0np0 00:19:07.810 altname ens817f0np0 00:19:07.810 inet 192.168.100.8/24 scope global mlx_0_0 00:19:07.810 valid_lft forever preferred_lft forever 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:07.810 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.810 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:19:07.810 altname enp152s0f1np1 00:19:07.810 altname ens817f1np1 00:19:07.810 inet 192.168.100.9/24 scope global mlx_0_1 00:19:07.810 valid_lft forever preferred_lft forever 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:07.810 18:04:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:07.810 192.168.100.9' 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:07.810 192.168.100.9' 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:07.810 192.168.100.9' 
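Address harvesting above is a three-stage pipe per interface: ip -o -4 addr show prints one line per address, awk takes the fourth field (addr/prefix), and cut strips the prefix length. Condensed into a standalone sketch (the mlx_0_* names are simply the devices this host exposes):

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ips=()
    for nic in mlx_0_0 mlx_0_1; do
        rdma_ips+=("$(get_ip_address "$nic")")
    done
    NVMF_FIRST_TARGET_IP=${rdma_ips[0]}      # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=${rdma_ips[1]}     # 192.168.100.9 in this run
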
00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:07.810 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3590328 00:19:07.811 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3590328 00:19:07.811 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:07.811 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3590328 ']' 00:19:07.811 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.811 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.811 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.811 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.811 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:07.811 [2024-11-20 18:04:26.152550] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:19:07.811 [2024-11-20 18:04:26.152617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.811 [2024-11-20 18:04:26.238707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.811 [2024-11-20 18:04:26.281595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.811 [2024-11-20 18:04:26.281635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.811 [2024-11-20 18:04:26.281643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.811 [2024-11-20 18:04:26.281650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:07.811 [2024-11-20 18:04:26.281656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.811 [2024-11-20 18:04:26.283285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.811 [2024-11-20 18:04:26.283400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.811 [2024-11-20 18:04:26.283558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.811 [2024-11-20 18:04:26.283558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.751 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.751 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:19:08.751 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.751 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.751 18:04:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:08.751 [2024-11-20 18:04:27.041899] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15dcf30/0x15e1420) succeed. 00:19:08.751 [2024-11-20 18:04:27.056648] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15de5c0/0x1622ac0) succeed. 
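
The mlx_0_0/mlx_0_1 address discovery traced above reduces to a short pipeline: 'ip -o -4 addr show <if>' prints one record per address, awk field 4 is the ADDR/PREFIX column, and cut strips the prefix length; the two results are then joined into the newline-separated RDMA_IP_LIST and split back apart with head/tail. A minimal sketch reconstructed from those xtrace lines (the helper name matches nvmf/common.sh, but the body here is inferred from the trace, not copied from the script):

    # Field $4 of `ip -o -4 addr show` is the ADDR/PREFIX column, e.g. "192.168.100.8/24".
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
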
00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:08.751 Malloc0 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:08.751 [2024-11-20 18:04:27.233091] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:08.751 [ 00:19:08.751 { 00:19:08.751 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:08.751 "subtype": "Discovery", 00:19:08.751 "listen_addresses": [], 00:19:08.751 "allow_any_host": true, 00:19:08.751 "hosts": [] 00:19:08.751 }, 00:19:08.751 { 00:19:08.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.751 "subtype": "NVMe", 00:19:08.751 "listen_addresses": [ 00:19:08.751 { 00:19:08.751 "trtype": "RDMA", 00:19:08.751 "adrfam": "IPv4", 00:19:08.751 "traddr": "192.168.100.8", 00:19:08.751 "trsvcid": "4420" 00:19:08.751 } 00:19:08.751 ], 00:19:08.751 "allow_any_host": true, 00:19:08.751 "hosts": [], 00:19:08.751 "serial_number": "SPDK00000000000001", 00:19:08.751 "model_number": "SPDK bdev Controller", 00:19:08.751 "max_namespaces": 2, 00:19:08.751 "min_cntlid": 1, 00:19:08.751 "max_cntlid": 65519, 00:19:08.751 "namespaces": [ 00:19:08.751 { 00:19:08.751 "nsid": 1, 00:19:08.751 "bdev_name": "Malloc0", 00:19:08.751 "name": "Malloc0", 00:19:08.751 "nguid": "30DF6719CC1F48329A2916FD3B9C4143", 00:19:08.751 "uuid": "30df6719-cc1f-4832-9a29-16fd3b9c4143" 00:19:08.751 } 00:19:08.751 ] 00:19:08.751 } 00:19:08.751 ] 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3590601 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:08.751 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:09.012 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:09.012 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:09.012 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:19:09.012 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:09.012 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:09.012 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:19:09.012 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:19:09.012 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:09.272 Malloc1 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:09.272 [ 00:19:09.272 { 00:19:09.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:09.272 "subtype": "Discovery", 00:19:09.272 "listen_addresses": [], 00:19:09.272 "allow_any_host": true, 00:19:09.272 "hosts": [] 00:19:09.272 }, 00:19:09.272 { 00:19:09.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.272 "subtype": "NVMe", 00:19:09.272 "listen_addresses": [ 00:19:09.272 { 00:19:09.272 "trtype": "RDMA", 00:19:09.272 "adrfam": "IPv4", 00:19:09.272 "traddr": "192.168.100.8", 00:19:09.272 "trsvcid": "4420" 00:19:09.272 } 00:19:09.272 ], 00:19:09.272 "allow_any_host": true, 00:19:09.272 "hosts": [], 00:19:09.272 "serial_number": "SPDK00000000000001", 00:19:09.272 "model_number": "SPDK bdev Controller", 00:19:09.272 "max_namespaces": 2, 00:19:09.272 "min_cntlid": 1, 00:19:09.272 "max_cntlid": 65519, 00:19:09.272 "namespaces": [ 00:19:09.272 { 00:19:09.272 "nsid": 1, 00:19:09.272 "bdev_name": "Malloc0", 00:19:09.272 "name": "Malloc0", 00:19:09.272 "nguid": "30DF6719CC1F48329A2916FD3B9C4143", 00:19:09.272 "uuid": "30df6719-cc1f-4832-9a29-16fd3b9c4143" 00:19:09.272 }, 00:19:09.272 { 00:19:09.272 "nsid": 2, 00:19:09.272 "bdev_name": "Malloc1", 00:19:09.272 "name": "Malloc1", 00:19:09.272 "nguid": "17395708A9C94B178156930938993E72", 00:19:09.272 "uuid": "17395708-a9c9-4b17-8156-930938993e72" 00:19:09.272 } 00:19:09.272 ] 00:19:09.272 } 00:19:09.272 ] 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3590601 00:19:09.272 Asynchronous Event Request test 00:19:09.272 Attaching to 192.168.100.8 00:19:09.272 Attached to 192.168.100.8 00:19:09.272 Registering asynchronous event callbacks... 00:19:09.272 Starting namespace attribute notice tests for all controllers... 00:19:09.272 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:09.272 aer_cb - Changed Namespace 00:19:09.272 Cleaning up... 
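
The aer binary touches /tmp/aer_touch_file once its AER callbacks are registered, and waitforfile gates the rest of the test on that. Judging from the i=0..3 counters and the 0.1 s sleeps traced above, the helper is a bounded poll, roughly as sketched here (reconstructed from the trace, assuming a 200-iteration cap, i.e. about a 20 s timeout):

    waitforfile() {
        local i=0
        # Poll until the file exists or ~20 s (200 x 0.1 s) have elapsed.
        while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$1" ]   # status 0 only if the file actually appeared
    }

    waitforfile /tmp/aer_touch_file
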
00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:09.272 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:09.273 rmmod nvme_rdma 00:19:09.273 rmmod nvme_fabrics 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3590328 ']' 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3590328 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3590328 ']' 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3590328 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3590328 00:19:09.273 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.533 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.533 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3590328' 00:19:09.533 killing process 
with pid 3590328 00:19:09.533 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3590328 00:19:09.533 18:04:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3590328 00:19:09.533 18:04:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.533 18:04:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:09.533 00:19:09.533 real 0m10.126s 00:19:09.533 user 0m9.024s 00:19:09.533 sys 0m6.628s 00:19:09.533 18:04:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.533 18:04:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:09.533 ************************************ 00:19:09.533 END TEST nvmf_aer 00:19:09.533 ************************************ 00:19:09.533 18:04:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:19:09.533 18:04:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:09.533 18:04:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.533 18:04:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.796 ************************************ 00:19:09.796 START TEST nvmf_async_init 00:19:09.796 ************************************ 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:19:09.796 * Looking for test storage... 00:19:09.796 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:19:09.796 
18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:09.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.796 --rc genhtml_branch_coverage=1 00:19:09.796 --rc genhtml_function_coverage=1 00:19:09.796 --rc genhtml_legend=1 00:19:09.796 --rc geninfo_all_blocks=1 00:19:09.796 --rc geninfo_unexecuted_blocks=1 00:19:09.796 00:19:09.796 ' 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:09.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.796 --rc genhtml_branch_coverage=1 00:19:09.796 --rc genhtml_function_coverage=1 00:19:09.796 --rc genhtml_legend=1 00:19:09.796 --rc geninfo_all_blocks=1 00:19:09.796 --rc geninfo_unexecuted_blocks=1 00:19:09.796 00:19:09.796 ' 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:09.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.796 --rc genhtml_branch_coverage=1 00:19:09.796 --rc genhtml_function_coverage=1 00:19:09.796 --rc genhtml_legend=1 00:19:09.796 --rc geninfo_all_blocks=1 00:19:09.796 --rc geninfo_unexecuted_blocks=1 00:19:09.796 00:19:09.796 ' 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:09.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.796 --rc genhtml_branch_coverage=1 00:19:09.796 --rc genhtml_function_coverage=1 00:19:09.796 --rc genhtml_legend=1 00:19:09.796 --rc geninfo_all_blocks=1 00:19:09.796 --rc geninfo_unexecuted_blocks=1 00:19:09.796 00:19:09.796 ' 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
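
The scripts/common.sh trace above ('lt 1.15 2' through 'return 0') is a dotted-version comparison used to decide which lcov flags apply: both version strings are split on '.', '-' and ':' and compared numerically one component at a time. A condensed sketch of that logic, simplified to the strictly-less case exercised here (the real cmp_versions takes an arbitrary operator, so the helper below is renamed to make the simplification explicit):

    lt() { cmp_versions_lt "$1" "$2"; }

    cmp_versions_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater
        done
        return 1   # all components equal: not strictly less
    }

    lt 1.15 2 && echo "pre-2.x lcov"   # true: 1 < 2 on the first component
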
00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:19:09.796 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:09.797 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
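
The "line 33: [: : integer expression expected" message above is bash's test builtin objecting to '[' '' -eq 1 ']': -eq requires integers on both sides, and the tested variable is empty in this configuration, so the check exits with status 2, evaluates false, and the script simply falls through to the next branch. Harmless here; the usual guard is to default the expansion before the arithmetic test, for example:

    flag=""                           # unset/empty in this run
    if [ "${flag:-0}" -eq 1 ]; then   # defaults to 0, so no error message
        echo "flag enabled"
    fi
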
00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7198ad3ecadb4f2eb55e1efeadda2e85 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:09.797 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.058 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.058 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.058 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:10.058 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:10.058 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:19:10.058 18:04:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:18.197 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:18.197 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:18.198 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:18.198 Found net devices under 0000:98:00.0: mlx_0_0 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:18.198 Found net devices under 0000:98:00.1: mlx_0_1 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:18.198 18:04:36 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:18.198 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:18.198 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:19:18.198 altname enp152s0f0np0 00:19:18.198 altname ens817f0np0 00:19:18.198 inet 192.168.100.8/24 scope global mlx_0_0 00:19:18.198 valid_lft forever preferred_lft forever 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:18.198 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:18.198 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:19:18.198 altname enp152s0f1np1 00:19:18.198 altname ens817f1np1 00:19:18.198 inet 192.168.100.9/24 scope global mlx_0_1 00:19:18.198 valid_lft forever preferred_lft forever 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:18.198 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:18.199 192.168.100.9' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:18.199 192.168.100.9' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:18.199 192.168.100.9' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3595027 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3595027 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3595027 ']' 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.199 18:04:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:18.459 [2024-11-20 18:04:36.731202] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:19:18.459 [2024-11-20 18:04:36.731254] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.459 [2024-11-20 18:04:36.810665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.459 [2024-11-20 18:04:36.845685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.459 [2024-11-20 18:04:36.845718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.459 [2024-11-20 18:04:36.845725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.459 [2024-11-20 18:04:36.845732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.459 [2024-11-20 18:04:36.845738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
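
The -m arguments are core masks handed down to DPDK: 0xF in the nvmf_aer run above selected cores 0-3 (hence the four reactor notices earlier), while 0x1 here pins nvmf_tgt to core 0 alone, matching the single reactor notice that follows. A quick illustrative decoder for such masks (not part of the test scripts, just a sketch of the bit layout):

    # Print the CPU cores selected by a hex core mask; the lowest bit is core 0.
    mask_to_cores() {
        local mask=$(( $1 )) core=0 cores=""
        while (( mask )); do
            if (( mask & 1 )); then cores+="$core "; fi
            core=$(( core + 1 ))
            mask=$(( mask >> 1 ))
        done
        echo "$cores"
    }

    mask_to_cores 0xF   # -> 0 1 2 3
    mask_to_cores 0x1   # -> 0
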
00:19:18.459 [2024-11-20 18:04:36.846294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.029 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.029 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:19:19.029 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.029 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.029 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 [2024-11-20 18:04:37.604793] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d10c80/0x1d15170) succeed. 00:19:19.290 [2024-11-20 18:04:37.616844] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d12130/0x1d56810) succeed. 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 null0 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7198ad3ecadb4f2eb55e1efeadda2e85 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 [2024-11-20 18:04:37.706513] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 nvme0n1 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.290 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.290 [ 00:19:19.290 { 00:19:19.290 "name": "nvme0n1", 00:19:19.290 "aliases": [ 00:19:19.290 "7198ad3e-cadb-4f2e-b55e-1efeadda2e85" 00:19:19.290 ], 00:19:19.290 "product_name": "NVMe disk", 00:19:19.290 "block_size": 512, 00:19:19.290 "num_blocks": 2097152, 00:19:19.290 "uuid": "7198ad3e-cadb-4f2e-b55e-1efeadda2e85", 00:19:19.290 "numa_id": 1, 00:19:19.291 "assigned_rate_limits": { 00:19:19.291 "rw_ios_per_sec": 0, 00:19:19.291 "rw_mbytes_per_sec": 0, 00:19:19.291 "r_mbytes_per_sec": 0, 00:19:19.291 "w_mbytes_per_sec": 0 00:19:19.291 }, 00:19:19.291 "claimed": false, 00:19:19.291 "zoned": false, 00:19:19.291 "supported_io_types": { 00:19:19.291 "read": true, 00:19:19.291 "write": true, 00:19:19.291 "unmap": false, 00:19:19.291 "flush": true, 00:19:19.291 "reset": true, 00:19:19.291 "nvme_admin": true, 00:19:19.291 "nvme_io": true, 00:19:19.291 "nvme_io_md": false, 00:19:19.291 "write_zeroes": true, 00:19:19.291 "zcopy": false, 00:19:19.291 "get_zone_info": false, 00:19:19.291 "zone_management": false, 00:19:19.291 "zone_append": false, 00:19:19.291 "compare": true, 00:19:19.291 "compare_and_write": true, 00:19:19.291 "abort": true, 00:19:19.291 "seek_hole": false, 00:19:19.291 "seek_data": false, 00:19:19.291 "copy": true, 00:19:19.291 "nvme_iov_md": false 00:19:19.291 }, 00:19:19.291 "memory_domains": [ 00:19:19.291 { 00:19:19.291 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:19.291 "dma_device_type": 0 00:19:19.291 } 00:19:19.291 ], 00:19:19.291 "driver_specific": { 00:19:19.291 "nvme": [ 00:19:19.291 { 00:19:19.291 "trid": { 00:19:19.291 "trtype": "RDMA", 00:19:19.291 "adrfam": "IPv4", 00:19:19.291 "traddr": "192.168.100.8", 00:19:19.291 "trsvcid": "4420", 00:19:19.291 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:19.291 }, 00:19:19.291 "ctrlr_data": { 00:19:19.291 "cntlid": 1, 00:19:19.291 "vendor_id": "0x8086", 00:19:19.291 "model_number": "SPDK bdev Controller", 00:19:19.291 "serial_number": "00000000000000000000", 00:19:19.291 "firmware_revision": "25.01", 00:19:19.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.291 "oacs": { 00:19:19.291 "security": 0, 
00:19:19.291 "format": 0, 00:19:19.291 "firmware": 0, 00:19:19.291 "ns_manage": 0 00:19:19.291 }, 00:19:19.291 "multi_ctrlr": true, 00:19:19.291 "ana_reporting": false 00:19:19.291 }, 00:19:19.291 "vs": { 00:19:19.291 "nvme_version": "1.3" 00:19:19.291 }, 00:19:19.291 "ns_data": { 00:19:19.291 "id": 1, 00:19:19.291 "can_share": true 00:19:19.291 } 00:19:19.291 } 00:19:19.291 ], 00:19:19.291 "mp_policy": "active_passive" 00:19:19.291 } 00:19:19.291 } 00:19:19.291 ] 00:19:19.291 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.291 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:19.291 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.291 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 [2024-11-20 18:04:37.826856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:19.552 [2024-11-20 18:04:37.850279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:19.552 [2024-11-20 18:04:37.876312] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 [ 00:19:19.552 { 00:19:19.552 "name": "nvme0n1", 00:19:19.552 "aliases": [ 00:19:19.552 "7198ad3e-cadb-4f2e-b55e-1efeadda2e85" 00:19:19.552 ], 00:19:19.552 "product_name": "NVMe disk", 00:19:19.552 "block_size": 512, 00:19:19.552 "num_blocks": 2097152, 00:19:19.552 "uuid": "7198ad3e-cadb-4f2e-b55e-1efeadda2e85", 00:19:19.552 "numa_id": 1, 00:19:19.552 "assigned_rate_limits": { 00:19:19.552 "rw_ios_per_sec": 0, 00:19:19.552 "rw_mbytes_per_sec": 0, 00:19:19.552 "r_mbytes_per_sec": 0, 00:19:19.552 "w_mbytes_per_sec": 0 00:19:19.552 }, 00:19:19.552 "claimed": false, 00:19:19.552 "zoned": false, 00:19:19.552 "supported_io_types": { 00:19:19.552 "read": true, 00:19:19.552 "write": true, 00:19:19.552 "unmap": false, 00:19:19.552 "flush": true, 00:19:19.552 "reset": true, 00:19:19.552 "nvme_admin": true, 00:19:19.552 "nvme_io": true, 00:19:19.552 "nvme_io_md": false, 00:19:19.552 "write_zeroes": true, 00:19:19.552 "zcopy": false, 00:19:19.552 "get_zone_info": false, 00:19:19.552 "zone_management": false, 00:19:19.552 "zone_append": false, 00:19:19.552 "compare": true, 00:19:19.552 "compare_and_write": true, 00:19:19.552 "abort": true, 00:19:19.552 "seek_hole": false, 00:19:19.552 "seek_data": false, 00:19:19.552 "copy": true, 00:19:19.552 "nvme_iov_md": false 00:19:19.552 }, 00:19:19.552 "memory_domains": [ 00:19:19.552 { 00:19:19.552 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:19.552 "dma_device_type": 0 00:19:19.552 } 00:19:19.552 ], 00:19:19.552 "driver_specific": { 00:19:19.552 "nvme": [ 00:19:19.552 { 00:19:19.552 "trid": { 00:19:19.552 "trtype": "RDMA", 00:19:19.552 "adrfam": "IPv4", 00:19:19.552 "traddr": "192.168.100.8", 
00:19:19.552 "trsvcid": "4420", 00:19:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:19.552 }, 00:19:19.552 "ctrlr_data": { 00:19:19.552 "cntlid": 2, 00:19:19.552 "vendor_id": "0x8086", 00:19:19.552 "model_number": "SPDK bdev Controller", 00:19:19.552 "serial_number": "00000000000000000000", 00:19:19.552 "firmware_revision": "25.01", 00:19:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.552 "oacs": { 00:19:19.552 "security": 0, 00:19:19.552 "format": 0, 00:19:19.552 "firmware": 0, 00:19:19.552 "ns_manage": 0 00:19:19.552 }, 00:19:19.552 "multi_ctrlr": true, 00:19:19.552 "ana_reporting": false 00:19:19.552 }, 00:19:19.552 "vs": { 00:19:19.552 "nvme_version": "1.3" 00:19:19.552 }, 00:19:19.552 "ns_data": { 00:19:19.552 "id": 1, 00:19:19.552 "can_share": true 00:19:19.552 } 00:19:19.552 } 00:19:19.552 ], 00:19:19.552 "mp_policy": "active_passive" 00:19:19.552 } 00:19:19.552 } 00:19:19.552 ] 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.OtuwloCiRg 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.OtuwloCiRg 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.OtuwloCiRg 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 [2024-11-20 18:04:37.981167] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 18:04:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 [2024-11-20 18:04:38.005237] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.552 nvme0n1 00:19:19.552 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.552 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:19.552 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.552 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.813 [ 00:19:19.813 { 00:19:19.813 "name": "nvme0n1", 00:19:19.813 "aliases": [ 00:19:19.813 "7198ad3e-cadb-4f2e-b55e-1efeadda2e85" 00:19:19.813 ], 00:19:19.813 "product_name": "NVMe disk", 00:19:19.813 "block_size": 512, 00:19:19.813 "num_blocks": 2097152, 00:19:19.813 "uuid": "7198ad3e-cadb-4f2e-b55e-1efeadda2e85", 00:19:19.813 "numa_id": 1, 00:19:19.813 "assigned_rate_limits": { 00:19:19.813 "rw_ios_per_sec": 0, 00:19:19.813 "rw_mbytes_per_sec": 0, 00:19:19.813 "r_mbytes_per_sec": 0, 00:19:19.813 "w_mbytes_per_sec": 0 00:19:19.813 }, 00:19:19.813 "claimed": false, 00:19:19.813 "zoned": false, 00:19:19.813 "supported_io_types": { 00:19:19.813 "read": true, 00:19:19.813 "write": true, 00:19:19.813 "unmap": false, 00:19:19.813 "flush": true, 00:19:19.813 "reset": true, 00:19:19.813 "nvme_admin": true, 00:19:19.813 "nvme_io": true, 00:19:19.813 "nvme_io_md": false, 00:19:19.813 "write_zeroes": true, 00:19:19.813 "zcopy": false, 00:19:19.813 "get_zone_info": false, 00:19:19.813 "zone_management": false, 00:19:19.813 "zone_append": false, 00:19:19.813 "compare": true, 00:19:19.813 "compare_and_write": true, 00:19:19.813 "abort": true, 00:19:19.813 "seek_hole": false, 00:19:19.813 "seek_data": false, 00:19:19.813 "copy": true, 00:19:19.813 "nvme_iov_md": false 00:19:19.813 }, 00:19:19.813 "memory_domains": [ 00:19:19.813 { 00:19:19.813 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:19.813 "dma_device_type": 0 00:19:19.813 } 00:19:19.813 ], 00:19:19.813 "driver_specific": { 00:19:19.813 "nvme": [ 00:19:19.813 { 00:19:19.813 "trid": { 00:19:19.813 "trtype": "RDMA", 00:19:19.813 "adrfam": "IPv4", 00:19:19.813 "traddr": "192.168.100.8", 00:19:19.813 "trsvcid": "4421", 00:19:19.813 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:19.813 }, 00:19:19.813 "ctrlr_data": { 00:19:19.813 "cntlid": 3, 00:19:19.813 "vendor_id": "0x8086", 00:19:19.813 "model_number": "SPDK bdev Controller", 00:19:19.813 
"serial_number": "00000000000000000000", 00:19:19.813 "firmware_revision": "25.01", 00:19:19.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.813 "oacs": { 00:19:19.813 "security": 0, 00:19:19.813 "format": 0, 00:19:19.813 "firmware": 0, 00:19:19.813 "ns_manage": 0 00:19:19.813 }, 00:19:19.813 "multi_ctrlr": true, 00:19:19.813 "ana_reporting": false 00:19:19.813 }, 00:19:19.813 "vs": { 00:19:19.813 "nvme_version": "1.3" 00:19:19.813 }, 00:19:19.813 "ns_data": { 00:19:19.814 "id": 1, 00:19:19.814 "can_share": true 00:19:19.814 } 00:19:19.814 } 00:19:19.814 ], 00:19:19.814 "mp_policy": "active_passive" 00:19:19.814 } 00:19:19.814 } 00:19:19.814 ] 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.OtuwloCiRg 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:19.814 rmmod nvme_rdma 00:19:19.814 rmmod nvme_fabrics 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3595027 ']' 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3595027 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3595027 ']' 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3595027 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3595027 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.814 18:04:38 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3595027' 00:19:19.814 killing process with pid 3595027 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3595027 00:19:19.814 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3595027 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:20.075 00:19:20.075 real 0m10.350s 00:19:20.075 user 0m4.326s 00:19:20.075 sys 0m6.657s 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:20.075 ************************************ 00:19:20.075 END TEST nvmf_async_init 00:19:20.075 ************************************ 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.075 ************************************ 00:19:20.075 START TEST dma 00:19:20.075 ************************************ 00:19:20.075 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:19:20.075 * Looking for test storage... 
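The async_init test that just finished drives the target purely through RPCs; condensed, the sequence traced above is the following (a sketch: rpc_cmd is a thin wrapper around scripts/rpc.py, and the redirect into the key file is inferred, since xtrace does not show redirections):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
  $rpc bdev_null_create null0 1024 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7198ad3ecadb4f2eb55e1efeadda2e85
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  # TLS variant on a second port: register a PSK, close the subsystem, re-admit one host with the key
  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"
  $rpc keyring_file_add_key key0 "$key_path"
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0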
00:19:20.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.337 --rc genhtml_branch_coverage=1 00:19:20.337 --rc genhtml_function_coverage=1 00:19:20.337 --rc genhtml_legend=1 00:19:20.337 --rc geninfo_all_blocks=1 00:19:20.337 --rc geninfo_unexecuted_blocks=1 00:19:20.337 00:19:20.337 ' 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.337 --rc genhtml_branch_coverage=1 00:19:20.337 --rc genhtml_function_coverage=1 00:19:20.337 --rc genhtml_legend=1 00:19:20.337 --rc geninfo_all_blocks=1 00:19:20.337 --rc geninfo_unexecuted_blocks=1 00:19:20.337 00:19:20.337 ' 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.337 --rc genhtml_branch_coverage=1 00:19:20.337 --rc genhtml_function_coverage=1 00:19:20.337 --rc genhtml_legend=1 00:19:20.337 --rc geninfo_all_blocks=1 00:19:20.337 --rc geninfo_unexecuted_blocks=1 00:19:20.337 00:19:20.337 ' 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.337 --rc genhtml_branch_coverage=1 00:19:20.337 --rc genhtml_function_coverage=1 00:19:20.337 --rc genhtml_legend=1 00:19:20.337 --rc geninfo_all_blocks=1 00:19:20.337 --rc geninfo_unexecuted_blocks=1 00:19:20.337 00:19:20.337 ' 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.337 18:04:38 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.338 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
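The NVME_HOSTNQN/NVME_HOSTID pair sourced above comes straight from nvme-cli; a sketch of the derivation (the parameter expansion is an assumption about how nvmf/common.sh strips the prefix, but it reproduces the values seen in this log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID, here 00539ede-7deb-ec11-9bc7-a4bf01928396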
00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.338 18:04:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:28.476 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:28.477 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:28.477 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:28.477 Found net devices under 0000:98:00.0: mlx_0_0 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:28.477 Found net devices under 0000:98:00.1: mlx_0_1 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:28.477 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.477 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:19:28.477 altname enp152s0f0np0 00:19:28.477 altname ens817f0np0 00:19:28.477 inet 192.168.100.8/24 scope global mlx_0_0 00:19:28.477 valid_lft forever preferred_lft forever 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:28.477 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:28.477 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.477 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:19:28.477 altname enp152s0f1np1 00:19:28.477 altname ens817f1np1 00:19:28.477 inet 192.168.100.9/24 scope global mlx_0_1 00:19:28.477 valid_lft forever preferred_lft forever 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:28.478 192.168.100.9' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:28.478 192.168.100.9' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:28.478 192.168.100.9' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=3599594 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3599594 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 3599594 ']' 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:28.478 18:04:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:28.478 [2024-11-20 18:04:46.968468] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:19:28.478 [2024-11-20 18:04:46.968542] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.738 [2024-11-20 18:04:47.052036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.738 [2024-11-20 18:04:47.094464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.738 [2024-11-20 18:04:47.094499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.738 [2024-11-20 18:04:47.094507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.738 [2024-11-20 18:04:47.094514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.738 [2024-11-20 18:04:47.094520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
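The interface probing traced above (get_rdma_if_list/get_ip_address) reduces to one small pipeline per RDMA netdev, exactly as the xtrace shows:

  # first IPv4 address on each mlx5 port, with the /24 prefix length stripped
  for dev in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
  done
  # -> 192.168.100.8 (NVMF_FIRST_TARGET_IP) and 192.168.100.9 (NVMF_SECOND_TARGET_IP)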
00:19:28.738 [2024-11-20 18:04:47.095804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.738 [2024-11-20 18:04:47.095808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.309 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:29.309 [2024-11-20 18:04:47.841955] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24767a0/0x247ac90) succeed. 00:19:29.568 [2024-11-20 18:04:47.855457] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2477cf0/0x24bc330) succeed. 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:29.568 Malloc0 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:29.568 [2024-11-20 18:04:47.994461] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:29.568 18:04:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.569 18:04:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:19:29.569 18:04:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:19:29.569 18:04:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:19:29.569 18:04:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:19:29.569 18:04:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:29.569 18:04:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:29.569 { 00:19:29.569 "params": { 00:19:29.569 "name": "Nvme$subsystem", 00:19:29.569 "trtype": "$TEST_TRANSPORT", 00:19:29.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.569 "adrfam": "ipv4", 00:19:29.569 "trsvcid": "$NVMF_PORT", 00:19:29.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.569 "hdgst": ${hdgst:-false}, 00:19:29.569 "ddgst": ${ddgst:-false} 00:19:29.569 }, 00:19:29.569 "method": "bdev_nvme_attach_controller" 00:19:29.569 } 00:19:29.569 EOF 00:19:29.569 )") 00:19:29.569 18:04:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:19:29.569 18:04:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:19:29.569 18:04:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:19:29.569 18:04:48 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:29.569 "params": { 00:19:29.569 "name": "Nvme0", 00:19:29.569 "trtype": "rdma", 00:19:29.569 "traddr": "192.168.100.8", 00:19:29.569 "adrfam": "ipv4", 00:19:29.569 "trsvcid": "4420", 00:19:29.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:29.569 "hdgst": false, 00:19:29.569 "ddgst": false 00:19:29.569 }, 00:19:29.569 "method": "bdev_nvme_attach_controller" 00:19:29.569 }' 00:19:29.569 [2024-11-20 18:04:48.053610] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
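test_dma reads its bdev configuration from --json /dev/fd/62, which dma.sh feeds from gen_nvmf_target_json (the bdev_nvme_attach_controller JSON printed above). Run by hand, the translate pass is roughly the following (a sketch; process substitution stands in for the harness's file-descriptor plumbing):

  ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
      --json <(gen_nvmf_target_json 0) -b Nvme0n1 -f -x translate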
00:19:29.569 [2024-11-20 18:04:48.053707] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599754 ] 00:19:29.829 [2024-11-20 18:04:48.121041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:29.829 [2024-11-20 18:04:48.151188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.829 [2024-11-20 18:04:48.151275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.118 bdev Nvme0n1 reports 1 memory domains 00:19:35.118 bdev Nvme0n1 supports RDMA memory domain 00:19:35.118 Initialization complete, running randrw IO for 5 sec on 2 cores 00:19:35.118 ========================================================================== 00:19:35.118 Latency [us] 00:19:35.118 IOPS MiB/s Average min max 00:19:35.118 Core 2: 23938.79 93.51 667.93 334.12 6904.97 00:19:35.118 Core 3: 26573.61 103.80 601.45 240.84 16553.79 00:19:35.118 ========================================================================== 00:19:35.118 Total : 50512.40 197.31 632.95 240.84 16553.79 00:19:35.118 00:19:35.118 Total operations: 252579, translate 252579 pull_push 0 memzero 0 00:19:35.118 18:04:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:19:35.118 18:04:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:19:35.118 18:04:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:19:35.118 [2024-11-20 18:04:53.504868] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
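Before the pull_push pass gets going, a quick consistency check on the translate results above: with -o 4096 every completed I/O moves 4 KiB, so the MiB/s column is simply IOPS / 256 (4096 / 2^20), and the Total row is the sum of the two cores. Verifiable from the shell:

  # 23938.79 + 26573.61 = 50512.40 IOPS; at 4 KiB per I/O that is IOPS / 256:
  awk 'BEGIN { printf "%.2f MiB/s\n", (23938.79 + 26573.61) / 256 }'
  # -> 197.31 MiB/s, matching the Total row of the table above

The "Total operations: 252579, translate 252579" counter likewise confirms that every operation took the translate path (zero pull_push, zero memzero), i.e. the RDMA memory domain reported by bdev Nvme0n1 was used for all I/O.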
00:19:35.118 [2024-11-20 18:04:53.504923] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600786 ] 00:19:35.118 [2024-11-20 18:04:53.569899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:35.118 [2024-11-20 18:04:53.598280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.118 [2024-11-20 18:04:53.598280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.402 bdev Malloc0 reports 2 memory domains 00:19:40.402 bdev Malloc0 doesn't support RDMA memory domain 00:19:40.402 Initialization complete, running randrw IO for 5 sec on 2 cores 00:19:40.402 ========================================================================== 00:19:40.402 Latency [us] 00:19:40.402 IOPS MiB/s Average min max 00:19:40.402 Core 2: 18423.33 71.97 867.94 344.62 1344.61 00:19:40.402 Core 3: 18499.11 72.26 864.38 353.19 1560.80 00:19:40.402 ========================================================================== 00:19:40.402 Total : 36922.43 144.23 866.16 344.62 1560.80 00:19:40.402 00:19:40.402 Total operations: 184667, translate 0 pull_push 738668 memzero 0 00:19:40.402 18:04:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:19:40.402 18:04:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:19:40.402 18:04:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:19:40.402 18:04:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:19:40.402 Ignoring -M option 00:19:40.402 [2024-11-20 18:04:58.829850] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
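Before the memzero pass initializes, two things stand out above. First, Malloc0 exposes no RDMA memory domain, so every transfer is staged through bounce buffers; the "pull_push 738668" counter is exactly 4x the 184667 completed I/Os, suggesting each operation is accounted as multiple pull/push steps. Second, the memzero pass that follows targets lvs0/lvol0, a logical volume whose creation falls outside this excerpt; since its bdev is later reported to support an RDMA memory domain, the lvstore presumably sits on the Nvme0n1 controller bdev attached earlier rather than on Malloc0. A rough reconstruction of that bring-up (real RPC names, but the base bdev and size are assumptions, and recent rpc.py takes the lvol size in MiB):

  # Hypothetical bring-up of lvs0/lvol0; not shown in this log excerpt:
  scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0   # assumed base bdev
  scripts/rpc.py bdev_lvol_create -l lvs0 lvol0 64       # assumed size, in MiB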
00:19:40.402 [2024-11-20 18:04:58.829912] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601820 ] 00:19:40.402 [2024-11-20 18:04:58.894702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:40.402 [2024-11-20 18:04:58.922723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.402 [2024-11-20 18:04:58.922723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.781 bdev 000a86e3-e89a-4449-a2c6-9b762afdc6cb reports 1 memory domains 00:19:45.781 bdev 000a86e3-e89a-4449-a2c6-9b762afdc6cb supports RDMA memory domain 00:19:45.781 Initialization complete, running randread IO for 5 sec on 2 cores 00:19:45.781 ========================================================================== 00:19:45.781 Latency [us] 00:19:45.781 IOPS MiB/s Average min max 00:19:45.781 Core 2: 117945.08 460.72 135.14 55.08 4104.30 00:19:45.781 Core 3: 123244.10 481.42 129.32 57.50 4064.04 00:19:45.781 ========================================================================== 00:19:45.781 Total : 241189.19 942.15 132.16 55.08 4104.30 00:19:45.781 00:19:45.781 Total operations: 1206033, translate 0 pull_push 0 memzero 1206033 00:19:45.781 18:05:04 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:19:46.040 [2024-11-20 18:05:04.409546] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:48.583 Initializing NVMe Controllers 00:19:48.583 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:19:48.583 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:48.583 Initialization complete. Launching workers. 00:19:48.583 ======================================================== 00:19:48.583 Latency(us) 00:19:48.583 Device Information : IOPS MiB/s Average min max 00:19:48.583 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2012.70 7.86 7917.24 4988.44 7984.27 00:19:48.583 ======================================================== 00:19:48.583 Total : 2012.70 7.86 7917.24 4988.44 7984.27 00:19:48.583 00:19:48.583 18:05:06 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:19:48.583 18:05:06 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:19:48.583 18:05:06 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:19:48.583 18:05:06 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:19:48.583 [2024-11-20 18:05:06.782067] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
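The spdk_nvme_perf step above (host/dma.sh@113) passes -r without a subnqn, so the tool first connects to the discovery service at 192.168.100.8:4420 and then attaches to whatever it advertises, here nqn.2016-06.io.spdk:cnode0; that indirect path is what triggers the deprecation warning about the discovery subsystem listener. Pinning the subsystem in the transport ID should skip the discovery hop (a sketch, otherwise using the same arguments as the logged run):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 16 -o 4096 -w write -t 1 \
    -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0'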
00:19:48.583 [2024-11-20 18:05:06.782116] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603446 ] 00:19:48.583 [2024-11-20 18:05:06.845879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:48.583 [2024-11-20 18:05:06.875261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.583 [2024-11-20 18:05:06.875262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.868 bdev d57f210b-5d5a-48ae-9cf2-69b9ed3d53c7 reports 1 memory domains 00:19:53.868 bdev d57f210b-5d5a-48ae-9cf2-69b9ed3d53c7 supports RDMA memory domain 00:19:53.868 Initialization complete, running randrw IO for 5 sec on 2 cores 00:19:53.868 ========================================================================== 00:19:53.868 Latency [us] 00:19:53.868 IOPS MiB/s Average min max 00:19:53.868 Core 2: 20713.85 80.91 771.90 11.79 16572.76 00:19:53.868 Core 3: 26678.91 104.21 599.25 8.03 16250.68 00:19:53.868 ========================================================================== 00:19:53.868 Total : 47392.77 185.13 674.71 8.03 16572.76 00:19:53.868 00:19:53.868 Total operations: 237009, translate 236900 pull_push 0 memzero 109 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:53.868 rmmod nvme_rdma 00:19:53.868 rmmod nvme_fabrics 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3599594 ']' 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3599594 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 3599594 ']' 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 3599594 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599594 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599594' 00:19:53.868 killing 
process with pid 3599594 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 3599594 00:19:53.868 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 3599594 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:54.128 00:19:54.128 real 0m34.047s 00:19:54.128 user 1m35.385s 00:19:54.128 sys 0m7.169s 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:54.128 ************************************ 00:19:54.128 END TEST dma 00:19:54.128 ************************************ 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.128 ************************************ 00:19:54.128 START TEST nvmf_identify 00:19:54.128 ************************************ 00:19:54.128 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:54.389 * Looking for test storage... 00:19:54.389 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.389 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.390 --rc genhtml_branch_coverage=1 00:19:54.390 --rc genhtml_function_coverage=1 00:19:54.390 --rc genhtml_legend=1 00:19:54.390 --rc geninfo_all_blocks=1 00:19:54.390 --rc geninfo_unexecuted_blocks=1 00:19:54.390 00:19:54.390 ' 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.390 --rc genhtml_branch_coverage=1 00:19:54.390 --rc genhtml_function_coverage=1 00:19:54.390 --rc genhtml_legend=1 00:19:54.390 --rc geninfo_all_blocks=1 00:19:54.390 --rc geninfo_unexecuted_blocks=1 00:19:54.390 00:19:54.390 ' 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.390 --rc genhtml_branch_coverage=1 00:19:54.390 --rc genhtml_function_coverage=1 00:19:54.390 --rc genhtml_legend=1 00:19:54.390 --rc geninfo_all_blocks=1 00:19:54.390 --rc geninfo_unexecuted_blocks=1 00:19:54.390 00:19:54.390 ' 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.390 --rc genhtml_branch_coverage=1 00:19:54.390 --rc genhtml_function_coverage=1 00:19:54.390 --rc genhtml_legend=1 00:19:54.390 --rc geninfo_all_blocks=1 00:19:54.390 --rc geninfo_unexecuted_blocks=1 00:19:54.390 00:19:54.390 ' 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:54.390 18:05:12 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:54.390 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:54.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:54.391 18:05:12 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.391 18:05:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.528 18:05:20 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:02.528 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:02.528 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:02.528 Found net devices under 0000:98:00.0: mlx_0_0 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:02.528 Found net devices under 0000:98:00.1: mlx_0_1 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:02.528 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:02.529 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:02.529 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:20:02.529 altname enp152s0f0np0 00:20:02.529 altname ens817f0np0 00:20:02.529 inet 192.168.100.8/24 scope global mlx_0_0 00:20:02.529 valid_lft forever preferred_lft forever 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:02.529 18:05:20 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:02.529 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:02.529 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:20:02.529 altname enp152s0f1np1 00:20:02.529 altname ens817f1np1 00:20:02.529 inet 192.168.100.9/24 scope global mlx_0_1 00:20:02.529 valid_lft forever preferred_lft forever 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:02.529 18:05:20 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:02.529 192.168.100.9' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:02.529 192.168.100.9' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:02.529 192.168.100.9' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.529 18:05:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.529 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3608787 00:20:02.529 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:02.529 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:02.529 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 3608787 00:20:02.530 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3608787 ']' 00:20:02.530 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.530 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.530 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.530 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.530 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.530 [2024-11-20 18:05:21.051229] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:20:02.530 [2024-11-20 18:05:21.051290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.790 [2024-11-20 18:05:21.133921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.790 [2024-11-20 18:05:21.174513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.790 [2024-11-20 18:05:21.174547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.790 [2024-11-20 18:05:21.174555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.790 [2024-11-20 18:05:21.174562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.790 [2024-11-20 18:05:21.174568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.790 [2024-11-20 18:05:21.176242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.790 [2024-11-20 18:05:21.176358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.790 [2024-11-20 18:05:21.176514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.790 [2024-11-20 18:05:21.176515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.359 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.359 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:03.359 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:03.359 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.359 18:05:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.359 [2024-11-20 18:05:21.891368] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1289f30/0x128e420) succeed. 00:20:03.619 [2024-11-20 18:05:21.904428] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x128b5c0/0x12cfac0) succeed. 
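Stripped of the xtrace prefixes, the identify-test target setup running across these lines is a plain sequence of RPCs; the same state could be reproduced against a standalone nvmf_tgt with rpc.py, using exactly the arguments recorded in this log (paths relative to the spdk tree):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420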
00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.619 Malloc0 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.619 [2024-11-20 18:05:22.127509] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.619 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.884 [ 00:20:03.884 { 00:20:03.884 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:03.884 "subtype": "Discovery", 00:20:03.884 "listen_addresses": [ 00:20:03.884 { 00:20:03.884 "trtype": "RDMA", 
00:20:03.884 "adrfam": "IPv4", 00:20:03.884 "traddr": "192.168.100.8", 00:20:03.884 "trsvcid": "4420" 00:20:03.884 } 00:20:03.884 ], 00:20:03.884 "allow_any_host": true, 00:20:03.884 "hosts": [] 00:20:03.884 }, 00:20:03.884 { 00:20:03.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.884 "subtype": "NVMe", 00:20:03.884 "listen_addresses": [ 00:20:03.884 { 00:20:03.884 "trtype": "RDMA", 00:20:03.884 "adrfam": "IPv4", 00:20:03.884 "traddr": "192.168.100.8", 00:20:03.884 "trsvcid": "4420" 00:20:03.884 } 00:20:03.884 ], 00:20:03.884 "allow_any_host": true, 00:20:03.884 "hosts": [], 00:20:03.884 "serial_number": "SPDK00000000000001", 00:20:03.884 "model_number": "SPDK bdev Controller", 00:20:03.884 "max_namespaces": 32, 00:20:03.884 "min_cntlid": 1, 00:20:03.884 "max_cntlid": 65519, 00:20:03.884 "namespaces": [ 00:20:03.884 { 00:20:03.884 "nsid": 1, 00:20:03.884 "bdev_name": "Malloc0", 00:20:03.884 "name": "Malloc0", 00:20:03.884 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:03.884 "eui64": "ABCDEF0123456789", 00:20:03.884 "uuid": "9f61e4fd-f090-4bf3-9ebe-fcaa50494294" 00:20:03.884 } 00:20:03.884 ] 00:20:03.884 } 00:20:03.884 ] 00:20:03.884 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.884 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:03.884 [2024-11-20 18:05:22.191051] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:20:03.884 [2024-11-20 18:05:22.191095] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609081 ] 00:20:03.884 [2024-11-20 18:05:22.267812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:03.884 [2024-11-20 18:05:22.267895] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:03.884 [2024-11-20 18:05:22.267914] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:03.884 [2024-11-20 18:05:22.267919] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:03.884 [2024-11-20 18:05:22.267952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:03.884 [2024-11-20 18:05:22.282388] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:20:03.884 [2024-11-20 18:05:22.304079] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:03.884 [2024-11-20 18:05:22.304089] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:03.884 [2024-11-20 18:05:22.304097] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304103] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304108] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304113] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304118] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304124] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304129] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304134] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304139] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304144] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304149] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304154] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304159] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304164] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304169] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304174] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304180] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304185] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304190] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304195] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304200] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304209] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304214] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 
18:05:22.304219] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304224] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304229] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304234] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304239] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304245] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304250] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304255] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304259] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:03.884 [2024-11-20 18:05:22.304264] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:03.884 [2024-11-20 18:05:22.304267] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:03.884 [2024-11-20 18:05:22.304283] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.304295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183300 00:20:03.884 [2024-11-20 18:05:22.310866] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.884 [2024-11-20 18:05:22.310875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:03.884 [2024-11-20 18:05:22.310882] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183300 00:20:03.884 [2024-11-20 18:05:22.310889] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:03.885 [2024-11-20 18:05:22.310896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:03.885 [2024-11-20 18:05:22.310901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:03.885 [2024-11-20 18:05:22.310912] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.310920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.885 [2024-11-20 18:05:22.310945] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.885 [2024-11-20 18:05:22.310950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:03.885 [2024-11-20 18:05:22.310956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:03.885 [2024-11-20 18:05:22.310961] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.310967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:03.885 [2024-11-20 18:05:22.310974] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.310981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.885 [2024-11-20 18:05:22.311009] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.885 [2024-11-20 18:05:22.311014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:03.885 [2024-11-20 18:05:22.311020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:03.885 [2024-11-20 18:05:22.311025] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:03.885 [2024-11-20 18:05:22.311038] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.885 [2024-11-20 18:05:22.311068] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.885 [2024-11-20 18:05:22.311073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:03.885 [2024-11-20 18:05:22.311078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:03.885 [2024-11-20 18:05:22.311083] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311091] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.885 [2024-11-20 18:05:22.311117] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.885 [2024-11-20 18:05:22.311122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:03.885 [2024-11-20 18:05:22.311127] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:03.885 [2024-11-20 18:05:22.311132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:03.885 [2024-11-20 18:05:22.311137] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183300 00:20:03.885 [2024-11-20 
18:05:22.311143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:03.885 [2024-11-20 18:05:22.311250] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:03.885 [2024-11-20 18:05:22.311255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:03.885 [2024-11-20 18:05:22.311264] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.885 [2024-11-20 18:05:22.311295] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.885 [2024-11-20 18:05:22.311300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:03.885 [2024-11-20 18:05:22.311305] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:03.885 [2024-11-20 18:05:22.311310] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311320] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.885 [2024-11-20 18:05:22.311345] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.885 [2024-11-20 18:05:22.311350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:03.885 [2024-11-20 18:05:22.311355] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:03.885 [2024-11-20 18:05:22.311360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:03.885 [2024-11-20 18:05:22.311365] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311371] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:03.885 [2024-11-20 18:05:22.311383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:03.885 [2024-11-20 18:05:22.311392] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183300 00:20:03.885 [2024-11-20 18:05:22.311444] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
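
The state transitions above are the standard NVMe enable handshake carried over the fabric as FABRIC PROPERTY GET/SET commands: read VS and CAP, check CC.EN, disable and wait for CSTS.RDY = 0 if a previous enable is pending, write CC.EN = 1, then poll until CSTS.RDY = 1. A compressed sketch of that sequence, using the register layouts from spdk/nvme_spec.h; prop_get/prop_set are hypothetical stand-ins for the transport's property commands, not an SPDK public API:

#include <stddef.h>
#include <stdint.h>
#include "spdk/nvme_spec.h"

/* Hypothetical property accessors standing in for the FABRIC PROPERTY
 * GET/SET admin commands shown in the log. */
extern uint64_t prop_get(uint32_t ofst);
extern void prop_set(uint32_t ofst, uint64_t value);

static void enable_fabrics_controller(void)
{
	union spdk_nvme_cc_register cc;
	union spdk_nvme_csts_register csts;

	/* "check en": CC sits at offset 0x14 of the register map. */
	cc.raw = (uint32_t)prop_get(offsetof(struct spdk_nvme_registers, cc));

	if (cc.bits.en == 0) {
		/* "disable and wait for CSTS.RDY = 0" */
		do {
			csts.raw = (uint32_t)prop_get(offsetof(struct spdk_nvme_registers, csts));
		} while (csts.bits.rdy != 0);
	}

	/* "Setting CC.EN = 1", then "wait for CSTS.RDY = 1". */
	cc.bits.en = 1;
	prop_set(offsetof(struct spdk_nvme_registers, cc), cc.raw);
	do {
		csts.raw = (uint32_t)prop_get(offsetof(struct spdk_nvme_registers, csts));
	} while (csts.bits.rdy == 0);
}

The "(timeout 15000 ms)" annotations in the log bound exactly these polling loops; a real implementation honors that deadline rather than spinning forever.
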
00:20:03.885 [2024-11-20 18:05:22.311448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:03.885 [2024-11-20 18:05:22.311456] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:03.885 [2024-11-20 18:05:22.311461] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:03.885 [2024-11-20 18:05:22.311465] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:03.885 [2024-11-20 18:05:22.311470] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:03.885 [2024-11-20 18:05:22.311475] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:03.885 [2024-11-20 18:05:22.311480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:03.885 [2024-11-20 18:05:22.311484] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:03.885 [2024-11-20 18:05:22.311500] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.885 [2024-11-20 18:05:22.311507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.885 [2024-11-20 18:05:22.311532] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.885 [2024-11-20 18:05:22.311536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:03.885 [2024-11-20 18:05:22.311544] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.886 [2024-11-20 18:05:22.311558] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.886 [2024-11-20 18:05:22.311571] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.886 [2024-11-20 18:05:22.311583] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.886 [2024-11-20 18:05:22.311593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:03.886 [2024-11-20 18:05:22.311598] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:03.886 [2024-11-20 18:05:22.311614] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.886 [2024-11-20 18:05:22.311642] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.886 [2024-11-20 18:05:22.311647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:03.886 [2024-11-20 18:05:22.311652] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:03.886 [2024-11-20 18:05:22.311657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:03.886 [2024-11-20 18:05:22.311662] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311671] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183300 00:20:03.886 [2024-11-20 18:05:22.311706] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.886 [2024-11-20 18:05:22.311711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:03.886 [2024-11-20 18:05:22.311717] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311725] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:03.886 [2024-11-20 18:05:22.311746] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183300 00:20:03.886 [2024-11-20 18:05:22.311760] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183300 00:20:03.886 [2024-11-20 18:05:22.311766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.886 [2024-11-20 18:05:22.311788] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.886 [2024-11-20 18:05:22.311795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
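
With the controller ready, the host arms the keep-alive timer (5000000 us above) and reads log page 0x70, the discovery log, in the three steps visible around this point: a 1024-byte header fetch (cdw10:00ff0070, len:0x400) for genctr/numrec, the full page (cdw10:02ff0070, len:0xc00), and a genctr re-read (cdw10:00010070, len:0x8) to detect concurrent changes. A minimal sketch of the header fetch, assuming an already-connected ctrlr; the callback name and busy-wait are illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_page_done;

static void log_page_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	(void)cpl;
	g_log_page_done = true;
}

/* Read the 1024-byte discovery log header, mirroring the first
 * GET LOG PAGE above. */
static void read_discovery_header(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvmf_discovery_log_page *hdr = calloc(1, sizeof(*hdr));

	g_log_page_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     SPDK_NVME_GLOBAL_NS_TAG, hdr,
					     sizeof(*hdr), 0 /* offset */,
					     log_page_cb, NULL) != 0) {
		free(hdr);
		return;
	}
	while (!g_log_page_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	/* Corresponds to the "Generation Counter" / "Number of Records"
	 * lines printed further down. */
	printf("genctr %ju numrec %ju\n",
	       (uintmax_t)hdr->genctr, (uintmax_t)hdr->numrec);
	free(hdr);
}
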
00:20:03.886 [2024-11-20 18:05:22.311806] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183300
00:20:03.886 [2024-11-20 18:05:22.311812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183300
00:20:03.886 [2024-11-20 18:05:22.311817] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183300
00:20:03.886 [2024-11-20 18:05:22.311823] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:03.886 [2024-11-20 18:05:22.311827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:20:03.886 [2024-11-20 18:05:22.311832] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183300
00:20:03.886 [2024-11-20 18:05:22.311846] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:03.886 [2024-11-20 18:05:22.311851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:20:03.886 [2024-11-20 18:05:22.311860] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183300
00:20:03.886 [2024-11-20 18:05:22.311870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183300
00:20:03.886 [2024-11-20 18:05:22.311876] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183300
00:20:03.886 [2024-11-20 18:05:22.311907] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:03.886 [2024-11-20 18:05:22.311912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:20:03.886 [2024-11-20 18:05:22.311921] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183300
00:20:03.886 =====================================================
00:20:03.886 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:03.886 =====================================================
00:20:03.886 Controller Capabilities/Features
00:20:03.886 ================================
00:20:03.886 Vendor ID: 0000
00:20:03.886 Subsystem Vendor ID: 0000
00:20:03.886 Serial Number: ....................
00:20:03.886 Model Number: ........................................
00:20:03.886 Firmware Version: 25.01
00:20:03.886 Recommended Arb Burst: 0
00:20:03.886 IEEE OUI Identifier: 00 00 00
00:20:03.886 Multi-path I/O
00:20:03.886 May have multiple subsystem ports: No
00:20:03.886 May have multiple controllers: No
00:20:03.886 Associated with SR-IOV VF: No
00:20:03.886 Max Data Transfer Size: 131072
00:20:03.886 Max Number of Namespaces: 0
00:20:03.886 Max Number of I/O Queues: 1024
00:20:03.886 NVMe Specification Version (VS): 1.3
00:20:03.886 NVMe Specification Version (Identify): 1.3
00:20:03.886 Maximum Queue Entries: 128
00:20:03.886 Contiguous Queues Required: Yes
00:20:03.886 Arbitration Mechanisms Supported
00:20:03.886 Weighted Round Robin: Not Supported
00:20:03.886 Vendor Specific: Not Supported
00:20:03.886 Reset Timeout: 15000 ms
00:20:03.886 Doorbell Stride: 4 bytes
00:20:03.886 NVM Subsystem Reset: Not Supported
00:20:03.886 Command Sets Supported
00:20:03.886 NVM Command Set: Supported
00:20:03.886 Boot Partition: Not Supported
00:20:03.886 Memory Page Size Minimum: 4096 bytes
00:20:03.886 Memory Page Size Maximum: 4096 bytes
00:20:03.886 Persistent Memory Region: Not Supported
00:20:03.886 Optional Asynchronous Events Supported
00:20:03.886 Namespace Attribute Notices: Not Supported
00:20:03.886 Firmware Activation Notices: Not Supported
00:20:03.886 ANA Change Notices: Not Supported
00:20:03.886 PLE Aggregate Log Change Notices: Not Supported
00:20:03.886 LBA Status Info Alert Notices: Not Supported
00:20:03.886 EGE Aggregate Log Change Notices: Not Supported
00:20:03.886 Normal NVM Subsystem Shutdown event: Not Supported
00:20:03.886 Zone Descriptor Change Notices: Not Supported
00:20:03.886 Discovery Log Change Notices: Supported
00:20:03.886 Controller Attributes
00:20:03.886 128-bit Host Identifier: Not Supported
00:20:03.886 Non-Operational Permissive Mode: Not Supported
00:20:03.887 NVM Sets: Not Supported
00:20:03.887 Read Recovery Levels: Not Supported
00:20:03.887 Endurance Groups: Not Supported
00:20:03.887 Predictable Latency Mode: Not Supported
00:20:03.887 Traffic Based Keep Alive: Not Supported
00:20:03.887 Namespace Granularity: Not Supported
00:20:03.887 SQ Associations: Not Supported
00:20:03.887 UUID List: Not Supported
00:20:03.887 Multi-Domain Subsystem: Not Supported
00:20:03.887 Fixed Capacity Management: Not Supported
00:20:03.887 Variable Capacity Management: Not Supported
00:20:03.887 Delete Endurance Group: Not Supported
00:20:03.887 Delete NVM Set: Not Supported
00:20:03.887 Extended LBA Formats Supported: Not Supported
00:20:03.887 Flexible Data Placement Supported: Not Supported
00:20:03.887
00:20:03.887 Controller Memory Buffer Support
00:20:03.887 ================================
00:20:03.887 Supported: No
00:20:03.887
00:20:03.887 Persistent Memory Region Support
00:20:03.887 ================================
00:20:03.887 Supported: No
00:20:03.887
00:20:03.887 Admin Command Set Attributes
00:20:03.887 ============================
00:20:03.887 Security Send/Receive: Not Supported
00:20:03.887 Format NVM: Not Supported
00:20:03.887 Firmware Activate/Download: Not Supported
00:20:03.887 Namespace Management: Not Supported
00:20:03.887 Device Self-Test: Not Supported
00:20:03.887 Directives: Not Supported
00:20:03.887 NVMe-MI: Not Supported
00:20:03.887 Virtualization Management: Not Supported
00:20:03.887 Doorbell Buffer Config: Not Supported
00:20:03.887 Get LBA Status Capability: Not Supported
00:20:03.887 Command & Feature Lockdown Capability: Not Supported
00:20:03.887 Abort Command Limit: 1
00:20:03.887 Async Event Request Limit: 4
00:20:03.887 Number of Firmware Slots: N/A
00:20:03.887 Firmware Slot 1 Read-Only: N/A
00:20:03.887 Firmware Activation Without Reset: N/A
00:20:03.887 Multiple Update Detection Support: N/A
00:20:03.887 Firmware Update Granularity: No Information Provided
00:20:03.887 Per-Namespace SMART Log: No
00:20:03.887 Asymmetric Namespace Access Log Page: Not Supported
00:20:03.887 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:03.887 Command Effects Log Page: Not Supported
00:20:03.887 Get Log Page Extended Data: Supported
00:20:03.887 Telemetry Log Pages: Not Supported
00:20:03.887 Persistent Event Log Pages: Not Supported
00:20:03.887 Supported Log Pages Log Page: May Support
00:20:03.887 Commands Supported & Effects Log Page: Not Supported
00:20:03.887 Feature Identifiers & Effects Log Page: May Support
00:20:03.887 NVMe-MI Commands & Effects Log Page: May Support
00:20:03.887 Data Area 4 for Telemetry Log: Not Supported
00:20:03.887 Error Log Page Entries Supported: 128
00:20:03.887 Keep Alive: Not Supported
00:20:03.887
00:20:03.887 NVM Command Set Attributes
00:20:03.887 ==========================
00:20:03.887 Submission Queue Entry Size
00:20:03.887 Max: 1
00:20:03.887 Min: 1
00:20:03.887 Completion Queue Entry Size
00:20:03.887 Max: 1
00:20:03.887 Min: 1
00:20:03.887 Number of Namespaces: 0
00:20:03.887 Compare Command: Not Supported
00:20:03.887 Write Uncorrectable Command: Not Supported
00:20:03.887 Dataset Management Command: Not Supported
00:20:03.887 Write Zeroes Command: Not Supported
00:20:03.887 Set Features Save Field: Not Supported
00:20:03.887 Reservations: Not Supported
00:20:03.887 Timestamp: Not Supported
00:20:03.887 Copy: Not Supported
00:20:03.887 Volatile Write Cache: Not Present
00:20:03.887 Atomic Write Unit (Normal): 1
00:20:03.887 Atomic Write Unit (PFail): 1
00:20:03.887 Atomic Compare & Write Unit: 1
00:20:03.887 Fused Compare & Write: Supported
00:20:03.887 Scatter-Gather List
00:20:03.887 SGL Command Set: Supported
00:20:03.887 SGL Keyed: Supported
00:20:03.887 SGL Bit Bucket Descriptor: Not Supported
00:20:03.887 SGL Metadata Pointer: Not Supported
00:20:03.887 Oversized SGL: Not Supported
00:20:03.887 SGL Metadata Address: Not Supported
00:20:03.887 SGL Offset: Supported
00:20:03.887 Transport SGL Data Block: Not Supported
00:20:03.887 Replay Protected Memory Block: Not Supported
00:20:03.887
00:20:03.887 Firmware Slot Information
00:20:03.887 =========================
00:20:03.887 Active slot: 0
00:20:03.887
00:20:03.887
00:20:03.887 Error Log
00:20:03.887 =========
00:20:03.887
00:20:03.887 Active Namespaces
00:20:03.887 =================
00:20:03.887 Discovery Log Page
00:20:03.887 ==================
00:20:03.887 Generation Counter: 2
00:20:03.887 Number of Records: 2
00:20:03.887 Record Format: 0
00:20:03.887
00:20:03.887 Discovery Log Entry 0
00:20:03.887 ----------------------
00:20:03.887 Transport Type: 1 (RDMA)
00:20:03.887 Address Family: 1 (IPv4)
00:20:03.887 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:03.887 Entry Flags:
00:20:03.887 Duplicate Returned Information: 1
00:20:03.887 Explicit Persistent Connection Support for Discovery: 1
00:20:03.887 Transport Requirements:
00:20:03.887 Secure Channel: Not Required
00:20:03.887 Port ID: 0 (0x0000)
00:20:03.887 Controller ID: 65535 (0xffff)
00:20:03.887 Admin Max SQ Size: 128
00:20:03.887 Transport Service Identifier: 4420
00:20:03.887 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:03.887 Transport Address: 192.168.100.8
00:20:03.887 Transport Specific Address Subtype - RDMA
00:20:03.887 RDMA QP Service Type: 1 (Reliable Connected)
00:20:03.887 RDMA Provider Type: 1 (No provider specified)
00:20:03.887 RDMA CM Service: 1 (RDMA_CM)
00:20:03.887 Discovery Log Entry 1
00:20:03.887 ----------------------
00:20:03.887 Transport Type: 1 (RDMA)
00:20:03.887 Address Family: 1 (IPv4)
00:20:03.887 Subsystem Type: 2 (NVM Subsystem)
00:20:03.887 Entry Flags:
00:20:03.887 Duplicate Returned Information: 0
00:20:03.887 Explicit Persistent Connection Support for Discovery: 0
00:20:03.887 Transport Requirements:
00:20:03.888 Secure Channel: Not Required
00:20:03.888 Port ID: 0 (0x0000)
00:20:03.888 Controller ID: 65535 (0xffff)
00:20:03.888 Admin Max SQ Size: [2024-11-20 18:05:22.311990] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD [2024-11-20 18:05:22.311999] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52558 doesn't match qid [2024-11-20 18:05:22.312013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32709 cdw0:24ef27c0 sqhd:b320 p:0 m:0 dnr:0 [2024-11-20 18:05:22.312019] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52558 doesn't match qid [2024-11-20 18:05:22.312026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32709 cdw0:24ef27c0 sqhd:b320 p:0 m:0 dnr:0 [2024-11-20 18:05:22.312031] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52558 doesn't match qid [2024-11-20 18:05:22.312038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32709 cdw0:24ef27c0 sqhd:b320 p:0 m:0 dnr:0 [2024-11-20 18:05:22.312043] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52558 doesn't match qid [2024-11-20 18:05:22.312049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32709 cdw0:24ef27c0 sqhd:b320 p:0 m:0 dnr:0 [2024-11-20 18:05:22.312057] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183300 [2024-11-20 18:05:22.312065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:03.888 [2024-11-20 18:05:22.312085] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:03.888 [2024-11-20 18:05:22.312090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:20:03.888 [2024-11-20 18:05:22.312098] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300
00:20:03.888 [2024-11-20 18:05:22.312107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:03.888 [2024-11-20 18:05:22.312113] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183300
00:20:03.888 [2024-11-20 18:05:22.312133] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:03.888 [2024-11-20 18:05:22.312138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:20:03.888 [2024-11-20 18:05:22.312144]
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:03.888 [2024-11-20 18:05:22.312148] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:03.888 [2024-11-20 18:05:22.312153] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312161] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312194] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312204] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312213] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312244] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312254] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312263] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312292] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312303] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312312] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312337] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312348] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312356] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312389] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312400] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312409] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312440] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312452] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312460] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312488] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312498] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312506] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312537] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312548] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312557] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312589] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:03.888 [2024-11-20 18:05:22.312599] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312608] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.888 [2024-11-20 18:05:22.312615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.888 [2024-11-20 18:05:22.312636] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.888 [2024-11-20 18:05:22.312641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.312646] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312655] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.312683] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.312688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.312693] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312702] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.312730] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.312735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.312740] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312749] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.312776] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.312781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.312787] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312796] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.312826] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.312831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.312836] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312845] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.312882] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.312887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.312893] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312901] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.312928] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.312933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.312938] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312947] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.312955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.312983] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.312988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.312994] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313003] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.313031] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.313036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.313041] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313050] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.313082] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.313087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.313093] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313102] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.313132] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.313137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.313143] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313151] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.313178] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.313183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.313189] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313198] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.313228] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.313233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.313239] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313250] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.889 [2024-11-20 18:05:22.313279] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.889 [2024-11-20 18:05:22.313284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:03.889 [2024-11-20 18:05:22.313289] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313298] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.889 [2024-11-20 18:05:22.313305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313333] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313344] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313352] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313379] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313389] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313397] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313424] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313434] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313443] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313473] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313483] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313492] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313520] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313530] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313540] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313567] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313577] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313585] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313614] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313624] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313633] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313661] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313671] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313680] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313708] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313718] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313726] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313759] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313769] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313777] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313810] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313823] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313832] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313858] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313871] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313880] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313908] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313919] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313927] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.313955] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.313960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:03.890 [2024-11-20 18:05:22.313965] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313974] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.890 [2024-11-20 18:05:22.313981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.890 [2024-11-20 18:05:22.314008] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.890 [2024-11-20 18:05:22.314013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314019] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314027] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314056] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314066] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314074] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314105] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314116] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314125] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314151] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314161] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314170] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314198] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314208] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314217] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314247] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314257] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314266] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314292] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314302] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314311] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314345] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314355] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314364] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314396] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314408] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314417] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314445] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314455] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314464] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314496] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314506] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314515] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314543] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314553] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314562] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314592] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314602] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314610] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314645] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314655] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314664] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.891 [2024-11-20 18:05:22.314691] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.891 [2024-11-20 18:05:22.314696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:03.891 [2024-11-20 18:05:22.314702] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183300 00:20:03.891 [2024-11-20 18:05:22.314710] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.892 [2024-11-20 18:05:22.314717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.892 [2024-11-20 18:05:22.314738] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.892 [2024-11-20 18:05:22.314743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:03.892 [2024-11-20 18:05:22.314748] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183300 00:20:03.892 [2024-11-20 18:05:22.314757] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.892 [2024-11-20 18:05:22.314764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.892 [2024-11-20 18:05:22.314783] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.892 [2024-11-20 18:05:22.314788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:03.892 [2024-11-20 18:05:22.314793] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183300 00:20:03.892 [2024-11-20 18:05:22.314802] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.892 [2024-11-20 18:05:22.314808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:03.892 [2024-11-20 18:05:22.314826] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:03.892 [2024-11-20 18:05:22.314831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:03.892 [2024-11-20 18:05:22.314836] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183300 00:20:03.892 [2024-11-20 18:05:22.314845] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:03.892 [2024-11-20 18:05:22.314852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0
00:20:03.892 [2024-11-20 18:05:22.318868] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:03.892 [2024-11-20 18:05:22.318875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:20:03.892 [2024-11-20 18:05:22.318881] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183300
00:20:03.892 [2024-11-20 18:05:22.318890] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300
00:20:03.892 [2024-11-20 18:05:22.318896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:03.892 [2024-11-20 18:05:22.318916] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:03.892 [2024-11-20 18:05:22.318921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0
00:20:03.892 [2024-11-20 18:05:22.318926] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183300
00:20:03.892 [2024-11-20 18:05:22.318932] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds
00:20:03.892 128
00:20:03.892 Transport Service Identifier: 4420
00:20:03.892 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:03.892 Transport Address: 192.168.100.8
00:20:03.892 Transport Specific Address Subtype - RDMA
00:20:03.892 RDMA QP Service Type: 1 (Reliable Connected)
00:20:03.892 RDMA Provider Type: 1 (No provider specified)
00:20:03.892 RDMA CM Service: 1 (RDMA_CM)
00:20:03.892 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:20:03.892 [2024-11-20 18:05:22.404214] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization...
00:20:03.892 [2024-11-20 18:05:22.404284] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609148 ]
00:20:04.157 [2024-11-20 18:05:22.482473] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:20:04.157 [2024-11-20 18:05:22.482553] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:20:04.157 [2024-11-20 18:05:22.482569] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:20:04.157 [2024-11-20 18:05:22.482573] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:20:04.157 [2024-11-20 18:05:22.482601] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:20:04.157 [2024-11-20 18:05:22.491975] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:20:04.157 [2024-11-20 18:05:22.513067] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:04.157 [2024-11-20 18:05:22.513077] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:04.157 [2024-11-20 18:05:22.513085] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183300 00:20:04.157 [2024-11-20 18:05:22.513091] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183300 00:20:04.157 [2024-11-20 18:05:22.513096] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513101] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513106] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513112] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513117] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513122] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513127] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513132] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513138] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513143] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513148] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513153] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513161] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513167] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513172] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513177] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513182] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513187] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513193] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513198] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513203] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 
18:05:22.513208] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513213] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513218] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513224] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513229] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513234] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513239] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513244] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513249] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:04.158 [2024-11-20 18:05:22.513253] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:04.158 [2024-11-20 18:05:22.513257] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:04.158 [2024-11-20 18:05:22.513270] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.513281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183300 00:20:04.158 [2024-11-20 18:05:22.519868] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.158 [2024-11-20 18:05:22.519877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:04.158 [2024-11-20 18:05:22.519883] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.519890] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:04.158 [2024-11-20 18:05:22.519897] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:04.158 [2024-11-20 18:05:22.519902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:04.158 [2024-11-20 18:05:22.519915] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.519923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.158 [2024-11-20 18:05:22.519937] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.158 [2024-11-20 18:05:22.519944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:04.158 [2024-11-20 18:05:22.519950] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:04.158 [2024-11-20 18:05:22.519955] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.519961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:04.158 [2024-11-20 18:05:22.519967] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.519974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.158 [2024-11-20 18:05:22.519990] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.158 [2024-11-20 18:05:22.519995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:04.158 [2024-11-20 18:05:22.520001] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:04.158 [2024-11-20 18:05:22.520006] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.520012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:04.158 [2024-11-20 18:05:22.520019] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.520026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.158 [2024-11-20 18:05:22.520041] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.158 [2024-11-20 18:05:22.520046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:04.158 [2024-11-20 18:05:22.520052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:04.158 [2024-11-20 18:05:22.520057] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.520065] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.520072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.158 [2024-11-20 18:05:22.520090] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.158 [2024-11-20 18:05:22.520095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:04.158 [2024-11-20 18:05:22.520100] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:04.158 [2024-11-20 18:05:22.520105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:04.158 [2024-11-20 18:05:22.520109] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183300 00:20:04.158 [2024-11-20 18:05:22.520115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:04.159 [2024-11-20 18:05:22.520223] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:04.159 [2024-11-20 18:05:22.520228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:04.159 [2024-11-20 18:05:22.520236] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.159 [2024-11-20 18:05:22.520258] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.159 [2024-11-20 18:05:22.520263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:04.159 [2024-11-20 18:05:22.520268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:04.159 [2024-11-20 18:05:22.520273] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520281] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.159 [2024-11-20 18:05:22.520304] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.159 [2024-11-20 18:05:22.520308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:04.159 [2024-11-20 18:05:22.520314] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:04.159 [2024-11-20 18:05:22.520318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520323] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520329] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:04.159 [2024-11-20 18:05:22.520336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520345] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183300 00:20:04.159 [2024-11-20 18:05:22.520385] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.159 [2024-11-20 18:05:22.520390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:20:04.159 [2024-11-20 18:05:22.520397] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:04.159 [2024-11-20 18:05:22.520402] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:04.159 [2024-11-20 18:05:22.520406] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:04.159 [2024-11-20 18:05:22.520411] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:04.159 [2024-11-20 18:05:22.520415] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:04.159 [2024-11-20 18:05:22.520420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520425] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520441] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.159 [2024-11-20 18:05:22.520463] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.159 [2024-11-20 18:05:22.520468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:04.159 [2024-11-20 18:05:22.520475] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.159 [2024-11-20 18:05:22.520488] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.159 [2024-11-20 18:05:22.520500] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.159 [2024-11-20 18:05:22.520513] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.159 [2024-11-20 18:05:22.520523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520528] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183300 
00:20:04.159 [2024-11-20 18:05:22.520538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520545] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.159 [2024-11-20 18:05:22.520565] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.159 [2024-11-20 18:05:22.520570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:04.159 [2024-11-20 18:05:22.520575] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:04.159 [2024-11-20 18:05:22.520580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520585] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520607] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.159 [2024-11-20 18:05:22.520631] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.159 [2024-11-20 18:05:22.520635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:20:04.159 [2024-11-20 18:05:22.520700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520705] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:04.159 [2024-11-20 18:05:22.520720] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.159 [2024-11-20 18:05:22.520727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183300 00:20:04.159 [2024-11-20 18:05:22.520747] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.159 [2024-11-20 18:05:22.520751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:04.159 
[2024-11-20 18:05:22.520764] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:04.159 [2024-11-20 18:05:22.520773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520778] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.520785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520793] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.520799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183300 00:20:04.160 [2024-11-20 18:05:22.520825] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.520830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.520841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520847] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.520854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520867] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.520874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183300 00:20:04.160 [2024-11-20 18:05:22.520892] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.520897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.520904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520909] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.520916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520949] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:04.160 [2024-11-20 18:05:22.520953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:04.160 [2024-11-20 18:05:22.520959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:04.160 [2024-11-20 18:05:22.520972] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.520979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.160 [2024-11-20 18:05:22.520986] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.520993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.160 [2024-11-20 18:05:22.521002] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.521007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.521012] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521018] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.521023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.521028] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521036] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.160 [2024-11-20 18:05:22.521061] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.521065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.521071] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521079] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.160 [2024-11-20 18:05:22.521100] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.521105] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.521110] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521119] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:04.160 [2024-11-20 18:05:22.521139] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.521144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.521150] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521162] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183300 00:20:04.160 [2024-11-20 18:05:22.521177] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183300 00:20:04.160 [2024-11-20 18:05:22.521192] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183300 00:20:04.160 [2024-11-20 18:05:22.521207] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183300 00:20:04.160 [2024-11-20 18:05:22.521221] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.521226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.521236] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521242] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:04.160 [2024-11-20 18:05:22.521246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:04.160 [2024-11-20 18:05:22.521256] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183300 00:20:04.160 [2024-11-20 18:05:22.521261] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:04.160 [2024-11-20 18:05:22.521266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:20:04.160 [2024-11-20 18:05:22.521272] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183300
00:20:04.160 [2024-11-20 18:05:22.521277] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:04.160 [2024-11-20 18:05:22.521281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:20:04.160 [2024-11-20 18:05:22.521290] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183300
00:20:04.160 =====================================================
00:20:04.160 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:04.160 =====================================================
00:20:04.160 Controller Capabilities/Features
00:20:04.160 ================================
00:20:04.160 Vendor ID: 8086
00:20:04.160 Subsystem Vendor ID: 8086
00:20:04.160 Serial Number: SPDK00000000000001
00:20:04.160 Model Number: SPDK bdev Controller
00:20:04.161 Firmware Version: 25.01
00:20:04.161 Recommended Arb Burst: 6
00:20:04.161 IEEE OUI Identifier: e4 d2 5c
00:20:04.161 Multi-path I/O
00:20:04.161 May have multiple subsystem ports: Yes
00:20:04.161 May have multiple controllers: Yes
00:20:04.161 Associated with SR-IOV VF: No
00:20:04.161 Max Data Transfer Size: 131072
00:20:04.161 Max Number of Namespaces: 32
00:20:04.161 Max Number of I/O Queues: 127
00:20:04.161 NVMe Specification Version (VS): 1.3
00:20:04.161 NVMe Specification Version (Identify): 1.3
00:20:04.161 Maximum Queue Entries: 128
00:20:04.161 Contiguous Queues Required: Yes
00:20:04.161 Arbitration Mechanisms Supported
00:20:04.161 Weighted Round Robin: Not Supported
00:20:04.161 Vendor Specific: Not Supported
00:20:04.161 Reset Timeout: 15000 ms
00:20:04.161 Doorbell Stride: 4 bytes
00:20:04.161 NVM Subsystem Reset: Not Supported
00:20:04.161 Command Sets Supported
00:20:04.161 NVM Command Set: Supported
00:20:04.161 Boot Partition: Not Supported
00:20:04.161 Memory Page Size Minimum: 4096 bytes
00:20:04.161 Memory Page Size Maximum: 4096 bytes
00:20:04.161 Persistent Memory Region: Not Supported
00:20:04.161 Optional Asynchronous Events Supported
00:20:04.161 Namespace Attribute Notices: Supported
00:20:04.161 Firmware Activation Notices: Not Supported
00:20:04.161 ANA Change Notices: Not Supported
00:20:04.161 PLE Aggregate Log Change Notices: Not Supported
00:20:04.161 LBA Status Info Alert Notices: Not Supported
00:20:04.161 EGE Aggregate Log Change Notices: Not Supported
00:20:04.161 Normal NVM Subsystem Shutdown event: Not Supported
00:20:04.161 Zone Descriptor Change Notices: Not Supported
00:20:04.161 Discovery Log Change Notices: Not Supported
00:20:04.161 Controller Attributes
00:20:04.161 128-bit Host Identifier: Supported
00:20:04.161 Non-Operational Permissive Mode: Not Supported
00:20:04.161 NVM Sets: Not Supported
00:20:04.161 Read Recovery Levels: Not Supported
00:20:04.161 Endurance Groups: Not Supported
00:20:04.161 Predictable Latency Mode: Not Supported
00:20:04.161 Traffic Based Keep ALive: Not Supported
00:20:04.161 Namespace Granularity: Not Supported
00:20:04.161 SQ Associations: Not Supported
00:20:04.161 UUID List: Not Supported
00:20:04.161 Multi-Domain Subsystem: Not Supported
00:20:04.161 Fixed Capacity Management: Not Supported
00:20:04.161 Variable Capacity Management: Not Supported
00:20:04.161 Delete Endurance Group: Not Supported
00:20:04.161 Delete NVM Set: Not Supported
00:20:04.161 Extended LBA Formats Supported: Not Supported
00:20:04.161 Flexible Data Placement Supported: Not Supported
00:20:04.161
00:20:04.161 Controller Memory Buffer Support
00:20:04.161 ================================
00:20:04.161 Supported: No
00:20:04.161
00:20:04.161 Persistent Memory Region Support
00:20:04.161 ================================
00:20:04.161 Supported: No
00:20:04.161
00:20:04.161 Admin Command Set Attributes
00:20:04.161 ============================
00:20:04.161 Security Send/Receive: Not Supported
00:20:04.161 Format NVM: Not Supported
00:20:04.161 Firmware Activate/Download: Not Supported
00:20:04.161 Namespace Management: Not Supported
00:20:04.161 Device Self-Test: Not Supported
00:20:04.161 Directives: Not Supported
00:20:04.161 NVMe-MI: Not Supported
00:20:04.161 Virtualization Management: Not Supported
00:20:04.161 Doorbell Buffer Config: Not Supported
00:20:04.161 Get LBA Status Capability: Not Supported
00:20:04.161 Command & Feature Lockdown Capability: Not Supported
00:20:04.161 Abort Command Limit: 4
00:20:04.161 Async Event Request Limit: 4
00:20:04.161 Number of Firmware Slots: N/A
00:20:04.161 Firmware Slot 1 Read-Only: N/A
00:20:04.161 Firmware Activation Without Reset: N/A
00:20:04.161 Multiple Update Detection Support: N/A
00:20:04.161 Firmware Update Granularity: No Information Provided
00:20:04.161 Per-Namespace SMART Log: No
00:20:04.161 Asymmetric Namespace Access Log Page: Not Supported
00:20:04.161 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:04.161 Command Effects Log Page: Supported
00:20:04.161 Get Log Page Extended Data: Supported
00:20:04.161 Telemetry Log Pages: Not Supported
00:20:04.161 Persistent Event Log Pages: Not Supported
00:20:04.161 Supported Log Pages Log Page: May Support
00:20:04.161 Commands Supported & Effects Log Page: Not Supported
00:20:04.161 Feature Identifiers & Effects Log Page:May Support
00:20:04.161 NVMe-MI Commands & Effects Log Page: May Support
00:20:04.161 Data Area 4 for Telemetry Log: Not Supported
00:20:04.161 Error Log Page Entries Supported: 128
00:20:04.161 Keep Alive: Supported
00:20:04.161 Keep Alive Granularity: 10000 ms
00:20:04.161
00:20:04.161 NVM Command Set Attributes
00:20:04.161 ==========================
00:20:04.161 Submission Queue Entry Size
00:20:04.161 Max: 64
00:20:04.161 Min: 64
00:20:04.161 Completion Queue Entry Size
00:20:04.161 Max: 16
00:20:04.161 Min: 16
00:20:04.161 Number of Namespaces: 32
00:20:04.161 Compare Command: Supported
00:20:04.161 Write Uncorrectable Command: Not Supported
00:20:04.161 Dataset Management Command: Supported
00:20:04.161 Write Zeroes Command: Supported
00:20:04.161 Set Features Save Field: Not Supported
00:20:04.161 Reservations: Supported
00:20:04.161 Timestamp: Not Supported
00:20:04.161 Copy: Supported
00:20:04.161 Volatile Write Cache: Present
00:20:04.161 Atomic Write Unit (Normal): 1
00:20:04.161 Atomic Write Unit (PFail): 1
00:20:04.161 Atomic Compare & Write Unit: 1
00:20:04.161 Fused Compare & Write: Supported
00:20:04.161 Scatter-Gather List
00:20:04.161 SGL Command Set: Supported
00:20:04.161 SGL Keyed: Supported
00:20:04.161 SGL Bit Bucket Descriptor: Not Supported
00:20:04.161 SGL Metadata Pointer: Not Supported
00:20:04.161 Oversized SGL: Not Supported
00:20:04.161 SGL Metadata Address: Not Supported
00:20:04.161 SGL Offset: Supported
00:20:04.161 Transport SGL Data Block: Not Supported
00:20:04.161 Replay Protected Memory Block: Not Supported
00:20:04.161
00:20:04.161 Firmware Slot Information
00:20:04.161 =========================
00:20:04.161 Active slot: 1
00:20:04.161 Slot 1 Firmware Revision: 25.01
00:20:04.161
00:20:04.161
00:20:04.161 Commands Supported and Effects
00:20:04.161 ==============================
00:20:04.161 Admin Commands
00:20:04.161 --------------
00:20:04.161 Get Log Page (02h): Supported
00:20:04.161 Identify (06h): Supported
00:20:04.161 Abort (08h): Supported
00:20:04.161 Set Features (09h): Supported
00:20:04.161 Get Features (0Ah): Supported
00:20:04.161 Asynchronous Event Request (0Ch): Supported
00:20:04.161 Keep Alive (18h): Supported
00:20:04.161 I/O Commands
00:20:04.161 ------------
00:20:04.161 Flush (00h): Supported LBA-Change
00:20:04.161 Write (01h): Supported LBA-Change
00:20:04.161 Read (02h): Supported
00:20:04.162 Compare (05h): Supported
00:20:04.162 Write Zeroes (08h): Supported LBA-Change
00:20:04.162 Dataset Management (09h): Supported LBA-Change
00:20:04.162 Copy (19h): Supported LBA-Change
00:20:04.162
00:20:04.162 Error Log
00:20:04.162 =========
00:20:04.162
00:20:04.162 Arbitration
00:20:04.162 ===========
00:20:04.162 Arbitration Burst: 1
00:20:04.162
00:20:04.162 Power Management
00:20:04.162 ================
00:20:04.162 Number of Power States: 1
00:20:04.162 Current Power State: Power State #0
00:20:04.162 Power State #0:
00:20:04.162 Max Power: 0.00 W
00:20:04.162 Non-Operational State: Operational
00:20:04.162 Entry Latency: Not Reported
00:20:04.162 Exit Latency: Not Reported
00:20:04.162 Relative Read Throughput: 0
00:20:04.162 Relative Read Latency: 0
00:20:04.162 Relative Write Throughput: 0
00:20:04.162 Relative Write Latency: 0
00:20:04.162 Idle Power: Not Reported
00:20:04.162 Active Power: Not Reported
00:20:04.162 Non-Operational Permissive Mode: Not Supported
00:20:04.162
00:20:04.162 Health Information
00:20:04.162 ==================
00:20:04.162 Critical Warnings:
00:20:04.162 Available Spare Space: OK
00:20:04.162 Temperature: OK
00:20:04.162 Device Reliability: OK
00:20:04.162 Read Only: No
00:20:04.162 Volatile Memory Backup: OK
00:20:04.162 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:04.162 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:20:04.162 Available Spare: 0%
00:20:04.162 Available Spare Threshold: 0%
00:20:04.162 Life Percentage [2024-11-20 18:05:22.521381] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x183300
00:20:04.162 [2024-11-20 18:05:22.521389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:04.162 [2024-11-20 18:05:22.521407] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:04.162 [2024-11-20 18:05:22.521412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:20:04.162 [2024-11-20 18:05:22.521418] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183300
00:20:04.162 [2024-11-20 18:05:22.521445] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:20:04.162 [2024-11-20 18:05:22.521453] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45372 doesn't match qid
00:20:04.162 [2024-11-20 18:05:22.521466] nvme_qpair.c:
00:20:04.162 [2024-11-20 18:05:22.521466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32687 cdw0:bd0f3c50 sqhd:e320 p:0 m:0 dnr:0
[... the same "sqid 45372 doesn't match qid" *ERROR* / "ABORTED - SQ DELETION" *NOTICE* pair repeats three more times (18:05:22.521472 through 18:05:22.521502), elided here ...]
00:20:04.162 [2024-11-20 18:05:22.521510] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183300
[2024-11-20 18:05:22.521517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:04.162 [2024-11-20 18:05:22.521534] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
[2024-11-20 18:05:22.521539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0
00:20:04.162 [2024-11-20 18:05:22.521546] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300
00:20:04.162 [2024-11-20 18:05:22.521553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:04.162 [2024-11-20 18:05:22.521558] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183300
00:20:04.162 [2024-11-20 18:05:22.521570] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:04.162 [2024-11-20 18:05:22.521575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:20:04.162 [2024-11-20 18:05:22.521581] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
[2024-11-20 18:05:22.521585] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
[... a long run of repetitive *DEBUG*/*NOTICE* entries (18:05:22.521590 through 18:05:22.523859) elided: while waiting for the controller to shut down, the host repeatedly issues FABRIC PROPERTY GET (qid:0 cid:3), each cycle logging nvme_rdma_request_ready, _nvme_rdma_qpair_submit_request, CQ recv completion, and SUCCESS (00/00) cdw0:1, with sqhd stepping from 0x001b past 0x001f, wrapping to 0x0000, and reaching 0x0011 ...]
00:20:04.166 [2024-11-20 18:05:22.527875] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183300
00:20:04.166 [2024-11-20 18:05:22.527883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:04.166 [2024-11-20 18:05:22.527897] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:04.166 [2024-11-20 18:05:22.527901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0012 p:0 m:0 dnr:0
00:20:04.166 [2024-11-20 18:05:22.527907] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183300
00:20:04.166 [2024-11-20 18:05:22.527913] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:20:04.166 Used: 0%
00:20:04.166 Data Units Read: 0
00:20:04.166 Data Units Written: 0
00:20:04.166 Host Read Commands: 0
00:20:04.166 Host Write Commands: 0
00:20:04.166 Controller Busy Time: 0 minutes
00:20:04.166 Power Cycles: 0
00:20:04.166 Power On Hours: 0 hours
00:20:04.166 Unsafe Shutdowns: 0
00:20:04.166 Unrecoverable Media Errors: 0
00:20:04.166 Lifetime Error Log Entries: 0
00:20:04.166 Warning Temperature Time: 0 minutes
00:20:04.166 Critical Temperature Time: 0 minutes
00:20:04.166
00:20:04.166 Number of Queues
00:20:04.166 ================
00:20:04.166 Number of I/O Submission Queues: 127
00:20:04.166 Number of I/O Completion Queues: 127
00:20:04.166
00:20:04.166 Active Namespaces
00:20:04.166 =================
00:20:04.166 Namespace ID:1
00:20:04.166 Error Recovery Timeout: Unlimited
00:20:04.166 Command Set Identifier: NVM (00h)
00:20:04.166 Deallocate: Supported
00:20:04.166 Deallocated/Unwritten Error: Not Supported
00:20:04.166 Deallocated Read Value: Unknown
00:20:04.166 Deallocate in Write Zeroes: Not Supported
00:20:04.166 Deallocated Guard Field: 0xFFFF
00:20:04.166 Flush: Supported
00:20:04.166 Reservation: Supported
00:20:04.166 Namespace Sharing Capabilities: Multiple Controllers
00:20:04.166 Size (in LBAs): 131072 (0GiB)
00:20:04.166 Capacity (in LBAs): 131072 (0GiB)
00:20:04.166 Utilization (in LBAs): 131072 (0GiB)
00:20:04.166 NGUID: ABCDEF0123456789ABCDEF0123456789
00:20:04.166 EUI64: ABCDEF0123456789
00:20:04.166 UUID: 9f61e4fd-f090-4bf3-9ebe-fcaa50494294
00:20:04.166 Thin Provisioning: Not Supported
00:20:04.166 Per-NS Atomic Units: Yes
00:20:04.166 Atomic Boundary Size (Normal): 0
00:20:04.166 Atomic Boundary Size (PFail): 0
00:20:04.166 Atomic Boundary Offset: 0
00:20:04.166 Maximum Single Source Range Length: 65535
00:20:04.166 Maximum Copy Length: 65535
00:20:04.166 Maximum Source Range Count: 1
00:20:04.166 NGUID/EUI64 Never Reused: No
00:20:04.166 Namespace Write Protected: No
00:20:04.166 Number of LBA Formats: 1
00:20:04.166 Current LBA Format: LBA Format #00
00:20:04.166 LBA Format #00: Data Size: 512 Metadata Size: 0
00:20:04.166
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:04.166 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:04.167 rmmod nvme_rdma
rmmod nvme_fabrics
18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3608787 ']'
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3608787
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3608787 ']'
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3608787
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:04.167 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608787
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608787'
killing process with pid 3608787
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3608787
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3608787
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:20:04.427
00:20:04.427 real 0m10.293s
00:20:04.427 user 0m9.414s
00:20:04.427 sys 0m6.567s
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:04.427 18:05:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:04.427 ************************************
00:20:04.427 END TEST nvmf_identify
00:20:04.427 ************************************
00:20:04.689 18:05:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:20:04.689 18:05:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:04.689 18:05:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:04.689 18:05:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:04.689 ************************************
00:20:04.689 START TEST nvmf_perf
00:20:04.689 ************************************
00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:20:04.689 * Looking for test storage...
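For context on the nvmf_identify run that concluded above: it queries the RDMA target, prints the controller, log-page, and namespace data dumped earlier, and then removes the subsystem over RPC. A minimal manual equivalent is sketched below; the 192.168.100.8 address and port 4420 only echo the NVMF_IP_PREFIX/NVMF_PORT values traced later in this log and, like the build-tree paths, are assumptions rather than values proven by this output.
# Hypothetical manual re-run of the identify query (addresses/paths assumed):
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/examples/identify" \
    -r "trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420"   # dump controller/namespace data
# Tear the subsystem down afterwards, as host/identify.sh@52 does:
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1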
00:20:04.689 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:20:04.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.689 --rc genhtml_branch_coverage=1
00:20:04.689 --rc genhtml_function_coverage=1
00:20:04.689 --rc genhtml_legend=1
00:20:04.689 --rc geninfo_all_blocks=1
00:20:04.689 --rc geninfo_unexecuted_blocks=1
00:20:04.689
00:20:04.689 '
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:20:04.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.689 --rc genhtml_branch_coverage=1
00:20:04.689 --rc genhtml_function_coverage=1
00:20:04.689 --rc genhtml_legend=1
00:20:04.689 --rc geninfo_all_blocks=1
00:20:04.689 --rc geninfo_unexecuted_blocks=1
00:20:04.689
00:20:04.689 '
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:20:04.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.689 --rc genhtml_branch_coverage=1
00:20:04.689 --rc genhtml_function_coverage=1
00:20:04.689 --rc genhtml_legend=1
00:20:04.689 --rc geninfo_all_blocks=1
00:20:04.689 --rc geninfo_unexecuted_blocks=1
00:20:04.689
00:20:04.689 '
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:20:04.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.689 --rc genhtml_branch_coverage=1
00:20:04.689 --rc genhtml_function_coverage=1
00:20:04.689 --rc genhtml_legend=1
00:20:04.689 --rc geninfo_all_blocks=1
00:20:04.689 --rc geninfo_unexecuted_blocks=1
00:20:04.689
00:20:04.689 '
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
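The lt/cmp_versions trace above is scripts/common.sh comparing the installed lcov version (1.15) against 2 field by field: both strings are split on IFS=.-:, each index is compared numerically, and the first differing field decides the result. A condensed standalone rendering of that idea (hypothetical helper name, not the script's literal code):
# Sketch of the version-compare logic traced above: split on '.', '-', ':'
# and compare field by field, treating missing fields as 0.
version_lt() {                     # succeeds when $1 sorts strictly before $2
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0   # first lower field decides
        ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
    done
    return 1                       # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x"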
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.689 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.690 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.690 18:05:23 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.690 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.951 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.951 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:04.951 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.951 18:05:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:13.090 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.090 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:13.090 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:13.090 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.091 18:05:30 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:13.091 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:13.091 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:13.091 Found net devices under 0000:98:00.0: mlx_0_0 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
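The loop above classifies candidate NICs by PCI vendor:device ID and, because this job selects mlx5 (the [[ mlx5 == mlx5 ]] test at nvmf/common.sh@353), narrows pci_devs to the Mellanox list before probing each port; both ports in this run report 0x15b3:0x1015 (ConnectX-4 Lx), and the second port is processed the same way just below. A compact sketch of that classification, using a subset of the ID table from the e810+=/x722+=/mlx+= lines traced above (the pci_bus_cache population itself happens elsewhere in common.sh and is assumed here):

    #!/usr/bin/env bash
    # Vendor:device -> NIC family, IDs copied from the trace above (subset).
    declare -A nic_class=(
        [0x15b3:0x1015]=mlx  [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx
        [0x8086:0x1592]=e810 [0x8086:0x159b]=e810 [0x8086:0x37d2]=x722
    )
    classify() { echo "${nic_class[$1:$2]:-unknown}"; }
    classify 0x15b3 0x1015    # -> mlx, matching both ports found in this run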
00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:13.091 Found net devices under 0000:98:00.1: mlx_0_1 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:13.091 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:13.091 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:13.091 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:20:13.091 altname enp152s0f0np0 00:20:13.092 altname ens817f0np0 00:20:13.092 inet 192.168.100.8/24 scope global mlx_0_0 00:20:13.092 valid_lft forever preferred_lft forever 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:13.092 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:13.092 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:20:13.092 altname enp152s0f1np1 00:20:13.092 altname ens817f1np1 00:20:13.092 inet 192.168.100.9/24 scope global mlx_0_1 00:20:13.092 valid_lft forever preferred_lft forever 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:20:13.092 192.168.100.9' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:13.092 192.168.100.9' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:13.092 192.168.100.9' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:13.092 18:05:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3613379 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3613379 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3613379 ']' 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.092 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:13.092 [2024-11-20 18:05:31.071705] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
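While nvmf_tgt comes up (its DPDK EAL banner continues below), a recap of the interface discovery that just completed: the harness resolved both RDMA-capable PCI functions to kernel net devices via sysfs and harvested their IPv4 addresses. A minimal standalone sketch reconstructed from the trace above — the sysfs glob mirrors nvmf/common.sh@411 and the address pipeline mirrors nvmf/common.sh@117; the helper names and the direct per-interface calls are ours, not the literal script:

    #!/usr/bin/env bash
    # Map a PCI function to its bound net device, as the trace above does.
    pci_to_netdev() {
        local pci=$1
        local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || return 1    # no netdev bound to this function
        echo "${pci_net_devs[0]##*/}"              # strip sysfs path, keep iface name
    }
    # First IPv4 address of an interface (ip -o -4 | awk | cut, as logged).
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    dev0=$(pci_to_netdev 0000:98:00.0)              # mlx_0_0 in this run
    dev1=$(pci_to_netdev 0000:98:00.1)              # mlx_0_1 in this run
    NVMF_FIRST_TARGET_IP=$(get_ip_address "$dev0")  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(get_ip_address "$dev1") # 192.168.100.9

The script itself derives the two target IPs from the combined RDMA_IP_LIST with head -n 1 and tail -n +2 | head -n 1, as traced above; the per-interface calls here are an equivalent shortcut.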
00:20:13.092 [2024-11-20 18:05:31.071778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.092 [2024-11-20 18:05:31.157810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.092 [2024-11-20 18:05:31.199955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.092 [2024-11-20 18:05:31.199996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.092 [2024-11-20 18:05:31.200005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.092 [2024-11-20 18:05:31.200012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.092 [2024-11-20 18:05:31.200018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.092 [2024-11-20 18:05:31.201704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.092 [2024-11-20 18:05:31.201845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.092 [2024-11-20 18:05:31.202008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.092 [2024-11-20 18:05:31.202009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.352 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.353 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:20:13.353 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.353 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.353 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:13.612 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.612 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:13.612 18:05:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:14.183 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:14.183 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:14.183 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:20:14.183 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.444 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:14.444 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:20:14.444 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:14.444 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:20:14.444 18:05:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:20:14.444 [2024-11-20 18:05:32.973019] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:20:14.705 [2024-11-20 18:05:32.999955] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14996d0/0x136f2e0) succeed. 00:20:14.705 [2024-11-20 18:05:33.013015] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x149ab80/0x13eefc0) succeed. 00:20:14.705 18:05:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.965 18:05:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:14.965 18:05:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:15.226 18:05:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:15.226 18:05:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:15.226 18:05:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:15.487 [2024-11-20 18:05:33.844650] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:15.487 18:05:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:15.748 18:05:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:20:15.748 18:05:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:20:15.748 18:05:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:15.749 18:05:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:20:17.131 Initializing NVMe Controllers 00:20:17.131 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:20:17.131 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:20:17.131 Initialization complete. Launching workers. 
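The results tables that follow exercise first the local PCIe namespace (trtype:PCIe) and then the two RDMA-attached namespaces. For reference when reading them, the target those RDMA runs connect to was assembled by the rpc.py calls traced above; condensed, with /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py abbreviated to $rpc, the sequence was:

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The -c 0 in-capsule request was raised to the 256-byte minimum, as the rdma.c warning above notes. NSID 1 is therefore the 64 MiB / 512 B Malloc0 ramdisk (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from host/perf.sh@12-13) and NSID 2 the passed-through Nvme0n1 at 0000:65:00.0, which explains the very different latency profiles the two namespaces show below.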
00:20:17.131 ========================================================
00:20:17.131 Latency(us)
00:20:17.131 Device Information : IOPS MiB/s Average min max
00:20:17.131 PCIE (0000:65:00.0) NSID 1 from core 0: 79049.75 308.79 404.07 13.28 5467.02
00:20:17.131 ========================================================
00:20:17.131 Total : 79049.75 308.79 404.07 13.28 5467.02
00:20:17.131
00:20:17.131 18:05:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:20:20.431 Initializing NVMe Controllers
00:20:20.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:20.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:20.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:20.431 Initialization complete. Launching workers.
00:20:20.431 ========================================================
00:20:20.431 Latency(us)
00:20:20.431 Device Information : IOPS MiB/s Average min max
00:20:20.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9573.12 37.39 104.10 37.60 4071.59
00:20:20.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7121.05 27.82 140.06 55.05 4103.84
00:20:20.431 ========================================================
00:20:20.431 Total : 16694.17 65.21 119.44 37.60 4103.84
00:20:20.431
00:20:20.431 18:05:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:20:23.727 Initializing NVMe Controllers
00:20:23.727 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:23.727 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:23.727 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:23.727 Initialization complete. Launching workers.
00:20:23.727 ========================================================
00:20:23.727 Latency(us)
00:20:23.727 Device Information : IOPS MiB/s Average min max
00:20:23.727 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19970.70 78.01 1601.84 403.95 5350.18
00:20:23.727 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4047.72 15.81 7960.09 6424.35 8308.41
00:20:23.727 ========================================================
00:20:23.727 Total : 24018.42 93.82 2673.37 403.95 8308.41
00:20:23.727
00:20:23.986 18:05:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:20:23.986 18:05:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:20:28.185 Initializing NVMe Controllers
00:20:28.185 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:28.185 Controller IO queue size 128, less than required.
00:20:28.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.185 Controller IO queue size 128, less than required.
00:20:28.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.185 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:28.185 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:28.185 Initialization complete. Launching workers.
00:20:28.185 ========================================================
00:20:28.185 Latency(us)
00:20:28.185 Device Information : IOPS MiB/s Average min max
00:20:28.185 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4985.01 1246.25 25676.19 10258.89 70694.76
00:20:28.185 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5016.44 1254.11 25277.64 11612.11 46747.44
00:20:28.185 ========================================================
00:20:28.185 Total : 10001.45 2500.36 25476.29 10258.89 70694.76
00:20:28.185
00:20:28.445 18:05:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:20:28.705 No valid NVMe controllers or AIO or URING devices found
00:20:28.705 Initializing NVMe Controllers
00:20:28.705 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:28.705 Controller IO queue size 128, less than required.
00:20:28.705 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.705 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:20:28.705 Controller IO queue size 128, less than required.
00:20:28.705 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.705 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:20:28.705 WARNING: Some requested NVMe devices were skipped
00:20:28.705 18:05:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:20:33.984 Initializing NVMe Controllers
00:20:33.984 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:33.984 Controller IO queue size 128, less than required.
00:20:33.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:33.984 Controller IO queue size 128, less than required.
00:20:33.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:33.984 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:33.984 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:33.984 Initialization complete. Launching workers.
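Two sanity checks on the numbers above, plus a note on what follows. The throughput and IOPS columns are mutually consistent: MiB/s = IOPS x IO size / 2^20, so for the -q 1 -o 4096 run, 16694.17 x 4096 / 1048576 ≈ 65.21 MiB/s, exactly its Total row (with 4 KiB IOs this reduces to MiB/s = IOPS / 256). And the -o 36964 run found no usable namespace because 36964 is not sector-aligned: 36964 = 72 x 512 + 100, so both 512-byte-sector namespaces were dropped and the run aborted. A quick shell check:

    $ echo $(( 36964 % 512 ))    # remainder 100 -> not a multiple of 512, ns removed from test
    100

The --transport-stat run's output below adds per-device RDMA counters (polls vs idle_polls, send/recv work requests, doorbell updates) ahead of its latency table.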
00:20:33.984 00:20:33.984 ==================== 00:20:33.984 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:33.984 RDMA transport: 00:20:33.984 dev name: mlx5_0 00:20:33.984 polls: 265652 00:20:33.984 idle_polls: 261540 00:20:33.984 completions: 54262 00:20:33.984 queued_requests: 1 00:20:33.984 total_send_wrs: 27131 00:20:33.984 send_doorbell_updates: 3663 00:20:33.984 total_recv_wrs: 27258 00:20:33.984 recv_doorbell_updates: 3665 00:20:33.984 --------------------------------- 00:20:33.984 00:20:33.984 ==================== 00:20:33.984 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:33.984 RDMA transport: 00:20:33.984 dev name: mlx5_0 00:20:33.985 polls: 270096 00:20:33.985 idle_polls: 269838 00:20:33.985 completions: 17634 00:20:33.985 queued_requests: 1 00:20:33.985 total_send_wrs: 8817 00:20:33.985 send_doorbell_updates: 249 00:20:33.985 total_recv_wrs: 8944 00:20:33.985 recv_doorbell_updates: 250 00:20:33.985 --------------------------------- 00:20:33.985 ======================================================== 00:20:33.985 Latency(us) 00:20:33.985 Device Information : IOPS MiB/s Average min max 00:20:33.985 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6782.50 1695.62 18901.64 8517.87 55838.11 00:20:33.985 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2204.00 551.00 58093.02 34912.68 90997.91 00:20:33.985 ======================================================== 00:20:33.985 Total : 8986.50 2246.62 28513.59 8517.87 90997.91 00:20:33.985 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:33.985 rmmod nvme_rdma 00:20:33.985 rmmod nvme_fabrics 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3613379 ']' 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3613379 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3613379 ']' 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # kill -0 3613379 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3613379 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3613379' 00:20:33.985 killing process with pid 3613379 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3613379 00:20:33.985 18:05:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3613379 00:20:35.895 18:05:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:35.895 18:05:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:35.895 00:20:35.895 real 0m30.934s 00:20:35.895 user 1m34.223s 00:20:35.895 sys 0m7.050s 00:20:35.895 18:05:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.895 18:05:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:35.895 ************************************ 00:20:35.895 END TEST nvmf_perf 00:20:35.895 ************************************ 00:20:35.895 18:05:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:20:35.895 18:05:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:35.895 18:05:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.895 18:05:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.895 ************************************ 00:20:35.895 START TEST nvmf_fio_host 00:20:35.895 ************************************ 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:20:35.895 * Looking for test storage... 
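Before the fio host test proceeds, note the teardown pattern nvmf_perf just completed above: delete the subsystem over RPC, unload the kernel initiator modules, then kill the target by PID. A sketch of the unload-with-retry idiom, reconstructed from the nvmf/common.sh@124-129 trace — the {1..20} bound is from the trace, only one pass was needed in this run, and any pacing between attempts is an assumption, since the trace does not show one:

    set +e                                       # unload may fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && \
        modprobe -v -r nvme-fabrics && break     # nvme-rdma drags in nvme-fabrics; remove both
        sleep 1                                  # assumed retry pacing, not visible in the trace
    done
    set -e

Here the first attempt succeeded (the rmmod nvme_rdma / rmmod nvme_fabrics lines in the log), after which killprocess 3613379 verified the process name was reactor_0 and terminated the nvmf_tgt started earlier.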
00:20:35.895 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:35.895 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:35.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.896 --rc genhtml_branch_coverage=1 00:20:35.896 --rc genhtml_function_coverage=1 00:20:35.896 --rc genhtml_legend=1 00:20:35.896 --rc geninfo_all_blocks=1 00:20:35.896 --rc geninfo_unexecuted_blocks=1 00:20:35.896 00:20:35.896 ' 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:35.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.896 --rc genhtml_branch_coverage=1 00:20:35.896 --rc genhtml_function_coverage=1 00:20:35.896 --rc genhtml_legend=1 00:20:35.896 --rc geninfo_all_blocks=1 00:20:35.896 --rc geninfo_unexecuted_blocks=1 00:20:35.896 00:20:35.896 ' 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:35.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.896 --rc genhtml_branch_coverage=1 00:20:35.896 --rc genhtml_function_coverage=1 00:20:35.896 --rc genhtml_legend=1 00:20:35.896 --rc geninfo_all_blocks=1 00:20:35.896 --rc geninfo_unexecuted_blocks=1 00:20:35.896 00:20:35.896 ' 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:35.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.896 --rc genhtml_branch_coverage=1 00:20:35.896 --rc genhtml_function_coverage=1 00:20:35.896 --rc genhtml_legend=1 00:20:35.896 --rc geninfo_all_blocks=1 00:20:35.896 --rc geninfo_unexecuted_blocks=1 00:20:35.896 00:20:35.896 ' 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.896 18:05:54 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.896 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.897 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:35.897 
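The `[: : integer expression expected` message traced above is script noise rather than a test failure: line 33 of test/nvmf/common.sh evaluates `'[' '' -eq 1 ']'` because the variable it tests is unset, and `-eq` demands integer operands on both sides. A minimal reproduction of the pattern, with one common way to guard it (the variable name below is illustrative only, not the one common.sh uses):

    # an unset variable expands to the empty string, which -eq cannot parse
    flag=""
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # defaulting to 0 keeps the test well-formed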
18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.897 18:05:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:44.034 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:44.034 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:44.034 Found net devices under 0000:98:00.0: mlx_0_0 00:20:44.034 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:44.035 Found net devices under 0000:98:00.1: mlx_0_1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:44.035 
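Before any addresses are handed out, rdma_device_init (traced above) pulls in the kernel RDMA stack one module at a time. Collapsed into a standalone sketch, the by-hand equivalent of load_ib_rdma_modules is roughly:

    # load the InfiniBand/RDMA core stack, mirroring the modprobe calls in the trace
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done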
18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:44.035 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:44.035 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:20:44.035 altname enp152s0f0np0 00:20:44.035 altname ens817f0np0 00:20:44.035 inet 192.168.100.8/24 scope global mlx_0_0 00:20:44.035 valid_lft forever preferred_lft forever 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:44.035 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:44.035 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:20:44.035 altname enp152s0f1np1 00:20:44.035 altname ens817f1np1 00:20:44.035 inet 192.168.100.9/24 scope global mlx_0_1 00:20:44.035 valid_lft forever preferred_lft forever 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:44.035 18:06:02 
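get_ip_address, traced once per interface above, scrapes each address with the same three-stage pipeline. Isolated from the trace, the one-liner is:

    # first IPv4 address of an interface, with the prefix length stripped
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8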
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:44.035 192.168.100.9' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:44.035 192.168.100.9' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:44.035 192.168.100.9' 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:44.035 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3622233 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3622233 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3622233 ']' 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.036 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.296 [2024-11-20 18:06:02.591358] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:20:44.296 [2024-11-20 18:06:02.591417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.296 [2024-11-20 18:06:02.664662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.296 [2024-11-20 18:06:02.701583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.296 [2024-11-20 18:06:02.701616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.296 [2024-11-20 18:06:02.701624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.296 [2024-11-20 18:06:02.701631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.296 [2024-11-20 18:06:02.701640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.296 [2024-11-20 18:06:02.703340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.296 [2024-11-20 18:06:02.703462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.296 [2024-11-20 18:06:02.703667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.296 [2024-11-20 18:06:02.703667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.296 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.296 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:20:44.296 18:06:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:44.557 [2024-11-20 18:06:02.973185] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1119f30/0x111e420) succeed. 00:20:44.557 [2024-11-20 18:06:02.987949] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x111b5c0/0x115fac0) succeed. 
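With both ConnectX devices registered, the bring-up that produced the two `create_ib_device ... succeed` notices reduces to two commands, condensed from the trace (paths shortened for readability):

    # start the target, then create the RDMA transport over its RPC socket
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192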
00:20:44.816 18:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:44.816 18:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.816 18:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.816 18:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:45.076 Malloc1 00:20:45.076 18:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.076 18:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:45.336 18:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:45.596 [2024-11-20 18:06:03.890565] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:45.596 18:06:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:45.596 18:06:04 
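Everything the fio job below exercises was provisioned through rpc.py in the four calls traced above; replayed by hand (paths again shortened), the sequence is approximately:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

fio then reaches the subsystem without a kernel-side connect at all: LD_PRELOAD points at build/fio/spdk_nvme and the transport address rides in --filename as 'trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'.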
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:45.596 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.877 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.877 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:45.877 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:45.877 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:45.877 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:45.877 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:45.877 18:06:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:46.144 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:46.144 fio-3.35 00:20:46.144 Starting 1 thread 00:20:48.708 00:20:48.708 test: (groupid=0, jobs=1): err= 0: pid=3622884: Wed Nov 20 18:06:06 2024 00:20:48.708 read: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2003msec) 00:20:48.708 slat (nsec): min=2021, max=36310, avg=2110.43, stdev=543.76 00:20:48.708 clat (usec): min=2640, max=5634, avg=3876.63, stdev=695.00 00:20:48.708 lat (usec): min=2666, max=5636, avg=3878.74, stdev=695.02 00:20:48.708 clat percentiles (usec): 00:20:48.708 | 1.00th=[ 2900], 5.00th=[ 3097], 10.00th=[ 3097], 20.00th=[ 3130], 00:20:48.708 | 30.00th=[ 3130], 40.00th=[ 3130], 50.00th=[ 4490], 60.00th=[ 4490], 00:20:48.708 | 70.00th=[ 4490], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4555], 00:20:48.708 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 4948], 99.95th=[ 5145], 00:20:48.708 | 99.99th=[ 5604] 00:20:48.708 bw ( KiB/s): min=55584, max=81224, per=99.96%, avg=65622.00, stdev=11966.15, samples=4 00:20:48.708 iops : min=13896, max=20306, avg=16405.50, stdev=2991.54, samples=4 00:20:48.708 write: IOPS=16.4k, BW=64.2MiB/s (67.3MB/s)(129MiB/2003msec); 0 zone resets 00:20:48.708 slat (nsec): min=2072, max=12662, avg=2180.50, stdev=540.64 00:20:48.708 clat (usec): min=2681, max=5642, avg=3875.66, stdev=694.14 00:20:48.708 lat (usec): min=2693, max=5645, avg=3877.84, stdev=694.17 00:20:48.708 clat percentiles (usec): 00:20:48.708 | 1.00th=[ 2900], 5.00th=[ 3097], 10.00th=[ 3097], 20.00th=[ 3097], 00:20:48.708 | 30.00th=[ 3130], 40.00th=[ 3130], 50.00th=[ 4490], 60.00th=[ 4490], 00:20:48.708 | 70.00th=[ 4490], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4555], 00:20:48.708 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 4948], 99.95th=[ 5211], 00:20:48.708 | 99.99th=[ 5538] 00:20:48.708 bw ( KiB/s): min=55856, max=82720, per=99.96%, avg=65732.00, stdev=12522.39, samples=4 00:20:48.708 iops : min=13964, max=20680, avg=16433.00, stdev=3130.60, samples=4 00:20:48.708 lat (msec) : 4=45.30%, 10=54.70% 00:20:48.708 cpu : usr=99.45%, sys=0.10%, ctx=16, majf=0, minf=21 00:20:48.708 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:48.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.708 issued rwts: total=32875,32929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.708 00:20:48.708 Run status group 0 (all jobs): 00:20:48.708 READ: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=128MiB (135MB), run=2003-2003msec 00:20:48.708 WRITE: bw=64.2MiB/s (67.3MB/s), 64.2MiB/s-64.2MiB/s (67.3MB/s-67.3MB/s), io=129MiB (135MB), run=2003-2003msec 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:48.708 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:48.709 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:48.709 18:06:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:48.970 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:48.970 fio-3.35 00:20:48.970 Starting 1 thread 00:20:51.678 00:20:51.678 test: (groupid=0, jobs=1): err= 0: pid=3623462: Wed Nov 20 18:06:09 2024 00:20:51.678 read: IOPS=13.8k, BW=216MiB/s (227MB/s)(427MiB/1972msec) 00:20:51.678 slat (nsec): min=3365, max=51564, avg=3617.43, stdev=1213.60 00:20:51.678 clat (usec): min=334, max=10531, avg=3408.66, stdev=1915.16 00:20:51.678 lat (usec): min=338, max=10558, avg=3412.28, stdev=1915.36 00:20:51.678 clat percentiles (usec): 00:20:51.678 | 1.00th=[ 922], 5.00th=[ 1090], 10.00th=[ 1237], 20.00th=[ 1549], 00:20:51.678 | 30.00th=[ 1876], 40.00th=[ 2343], 50.00th=[ 3032], 60.00th=[ 3752], 00:20:51.678 | 70.00th=[ 4490], 80.00th=[ 5211], 90.00th=[ 6259], 95.00th=[ 6783], 00:20:51.678 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[ 8717], 99.95th=[ 9372], 00:20:51.678 | 99.99th=[10421] 00:20:51.678 bw ( KiB/s): min=99232, max=114208, per=48.55%, avg=107560.00, stdev=7688.19, samples=4 00:20:51.678 iops : min= 6202, max= 7138, avg=6722.50, stdev=480.51, samples=4 00:20:51.678 write: IOPS=7685, BW=120MiB/s (126MB/s)(219MiB/1824msec); 0 zone resets 00:20:51.678 slat (usec): min=39, max=142, avg=40.79, stdev= 6.80 00:20:51.679 clat (usec): min=347, max=23256, avg=9913.43, stdev=5349.70 00:20:51.679 lat (usec): min=387, max=23295, avg=9954.22, stdev=5349.83 00:20:51.679 clat percentiles (usec): 00:20:51.679 | 1.00th=[ 2147], 5.00th=[ 2900], 10.00th=[ 3425], 20.00th=[ 4490], 00:20:51.679 | 30.00th=[ 5604], 40.00th=[ 7046], 50.00th=[ 8455], 60.00th=[12387], 00:20:51.679 | 70.00th=[14484], 80.00th=[15664], 90.00th=[16909], 95.00th=[17957], 00:20:51.679 | 99.00th=[20317], 99.50th=[20841], 99.90th=[22676], 99.95th=[23200], 00:20:51.679 | 99.99th=[23200] 00:20:51.679 bw ( KiB/s): min=105792, max=115904, per=90.95%, avg=111840.00, stdev=4755.49, samples=4 00:20:51.679 iops : min= 6612, max= 7244, avg=6990.00, stdev=297.22, samples=4 00:20:51.679 lat (usec) : 500=0.03%, 750=0.10%, 1000=1.53% 00:20:51.679 lat (msec) : 2=20.29%, 4=25.21%, 10=37.77%, 20=14.61%, 50=0.45% 00:20:51.679 cpu : usr=96.80%, sys=1.10%, ctx=184, majf=0, minf=37 00:20:51.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:20:51.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.679 issued rwts: total=27305,14019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.679 00:20:51.679 Run status group 0 (all jobs): 00:20:51.679 READ: bw=216MiB/s (227MB/s), 216MiB/s-216MiB/s (227MB/s-227MB/s), io=427MiB (447MB), run=1972-1972msec 00:20:51.679 WRITE: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=219MiB (230MB), run=1824-1824msec 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 
1 ']' 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:51.679 rmmod nvme_rdma 00:20:51.679 rmmod nvme_fabrics 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3622233 ']' 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3622233 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3622233 ']' 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3622233 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.679 18:06:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3622233 00:20:51.679 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.679 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.679 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3622233' 00:20:51.679 killing process with pid 3622233 00:20:51.679 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3622233 00:20:51.679 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3622233 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:51.940 00:20:51.940 real 0m16.229s 00:20:51.940 user 1m6.305s 00:20:51.940 sys 0m7.183s 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.940 ************************************ 00:20:51.940 END TEST nvmf_fio_host 00:20:51.940 ************************************ 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.940 ************************************ 00:20:51.940 START TEST nvmf_failover 00:20:51.940 ************************************ 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:51.940 * Looking for test storage... 00:20:51.940 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.940 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.201 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:52.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.201 --rc genhtml_branch_coverage=1 00:20:52.201 --rc genhtml_function_coverage=1 00:20:52.201 --rc genhtml_legend=1 00:20:52.201 --rc geninfo_all_blocks=1 00:20:52.201 --rc geninfo_unexecuted_blocks=1 00:20:52.201 00:20:52.201 ' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.202 --rc genhtml_branch_coverage=1 00:20:52.202 --rc genhtml_function_coverage=1 00:20:52.202 --rc genhtml_legend=1 00:20:52.202 --rc geninfo_all_blocks=1 00:20:52.202 --rc geninfo_unexecuted_blocks=1 00:20:52.202 00:20:52.202 ' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.202 --rc genhtml_branch_coverage=1 00:20:52.202 --rc genhtml_function_coverage=1 00:20:52.202 --rc genhtml_legend=1 00:20:52.202 --rc geninfo_all_blocks=1 00:20:52.202 --rc geninfo_unexecuted_blocks=1 00:20:52.202 00:20:52.202 ' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.202 --rc genhtml_branch_coverage=1 00:20:52.202 --rc genhtml_function_coverage=1 00:20:52.202 --rc genhtml_legend=1 00:20:52.202 --rc geninfo_all_blocks=1 00:20:52.202 --rc geninfo_unexecuted_blocks=1 00:20:52.202 00:20:52.202 ' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.202 18:06:10 
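The scripts/common.sh preamble above is cmp_versions at work: `lt 1.15 2` splits the installed lcov version and the threshold on dots, then compares field by field to choose the right coverage flags. A minimal standalone sketch of the same idea (not SPDK's exact implementation):

    # succeed (return 0) when dotted version $1 sorts before $2
    version_lt() {
        local IFS=.- i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace's outcome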
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.202 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.202 18:06:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:00.341 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:00.341 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:00.341 Found net devices under 0000:98:00.0: mlx_0_0 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:00.341 Found net devices under 0000:98:00.1: mlx_0_1 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:00.341 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:00.342 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:00.342 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:00.342 altname enp152s0f0np0 00:21:00.342 altname ens817f0np0 00:21:00.342 inet 192.168.100.8/24 scope global mlx_0_0 00:21:00.342 
valid_lft forever preferred_lft forever 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:00.342 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:00.342 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:00.342 altname enp152s0f1np1 00:21:00.342 altname ens817f1np1 00:21:00.342 inet 192.168.100.9/24 scope global mlx_0_1 00:21:00.342 valid_lft forever preferred_lft forever 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.342 18:06:18 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:00.342 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:00.603 192.168.100.9' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:00.603 192.168.100.9' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:00.603 192.168.100.9' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3628844 00:21:00.603 
18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3628844 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3628844 ']' 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.603 18:06:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:00.603 [2024-11-20 18:06:18.998416] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:21:00.603 [2024-11-20 18:06:18.998487] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.603 [2024-11-20 18:06:19.099649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:00.862 [2024-11-20 18:06:19.152225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.862 [2024-11-20 18:06:19.152277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.862 [2024-11-20 18:06:19.152291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.862 [2024-11-20 18:06:19.152298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.862 [2024-11-20 18:06:19.152304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
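
nvmfappstart launches the target and then blocks in waitforlisten until the RPC socket is actually serviced; the process merely existing is not enough, since DPDK initialization can take several seconds. A sketch of that readiness gate using the same binary, flags and socket as this job (the polling loop is an illustration of the pattern, not the autotest helper's exact code):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Core mask 0xE puts the target on cores 1-3; -e 0xFFFF enables all tracepoint groups.
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # rpc.py exits non-zero until the app is up and listening on /var/tmp/spdk.sock.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
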
00:21:00.862 [2024-11-20 18:06:19.154389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.862 [2024-11-20 18:06:19.154553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.862 [2024-11-20 18:06:19.154554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.432 18:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.432 18:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:01.432 18:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.432 18:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.432 18:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:01.432 18:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.432 18:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:01.692 [2024-11-20 18:06:20.037331] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1836650/0x183ab40) succeed. 00:21:01.692 [2024-11-20 18:06:20.052211] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1837c40/0x187c1e0) succeed. 00:21:01.692 18:06:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:01.953 Malloc0 00:21:01.953 18:06:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.213 18:06:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:02.473 18:06:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:02.473 [2024-11-20 18:06:20.897137] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:02.473 18:06:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:02.733 [2024-11-20 18:06:21.073323] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:02.734 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:02.734 [2024-11-20 18:06:21.249890] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3629215 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3629215 /var/tmp/bdevperf.sock 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3629215 ']' 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.996 18:06:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:03.941 18:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.941 18:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:03.941 18:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:03.941 NVMe0n1 00:21:03.941 18:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:04.202 00:21:04.202 18:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.202 18:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3629549 00:21:04.202 18:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:05.141 18:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:05.401 18:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:08.697 18:06:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:08.697 00:21:08.697 18:06:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:08.957 18:06:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:12.273 18:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:12.273 [2024-11-20 18:06:30.485878] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:12.273 18:06:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:13.216 18:06:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:13.216 18:06:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3629549 00:21:19.805 { 00:21:19.805 "results": [ 00:21:19.805 { 00:21:19.805 "job": "NVMe0n1", 00:21:19.805 "core_mask": "0x1", 00:21:19.805 "workload": "verify", 00:21:19.805 "status": "finished", 00:21:19.805 "verify_range": { 00:21:19.805 "start": 0, 00:21:19.805 "length": 16384 00:21:19.805 }, 00:21:19.805 "queue_depth": 128, 00:21:19.805 "io_size": 4096, 00:21:19.805 "runtime": 15.00564, 00:21:19.805 "iops": 13021.703839356402, 00:21:19.805 "mibps": 50.866030622485944, 00:21:19.805 "io_failed": 4316, 00:21:19.805 "io_timeout": 0, 00:21:19.805 "avg_latency_us": 9586.972856053208, 00:21:19.805 "min_latency_us": 348.16, 00:21:19.805 "max_latency_us": 1034594.9866666667 00:21:19.805 } 00:21:19.805 ], 00:21:19.805 "core_count": 1 00:21:19.805 } 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3629215 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3629215 ']' 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3629215 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3629215 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3629215' 00:21:19.805 killing process with pid 3629215 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3629215 00:21:19.805 18:06:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3629215 00:21:19.805 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:19.805 [2024-11-20 18:06:21.332688] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
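
This is the heart of the test: bdevperf holds a single NVMe bdev reachable through two portals, and the script cycles listeners underneath the running verify workload. Condensed from the trace above, with the addresses, ports and flags exactly as failover.sh issued them (the inline comments are annotation, not part of the original script):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    # Two paths to the same subsystem; -x failover keeps the extra path as a standby.
    $rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $nqn -x failover
    $rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $nqn -x failover
    # verify I/O is started (perform_tests), then the active portal is pulled:
    $rpc_py nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420    # I/O moves to 4421
    sleep 3
    $rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $nqn -x failover
    $rpc_py nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4421    # I/O moves to 4422
    sleep 3
    $rpc_py nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420       # original portal comes back
    sleep 1
    $rpc_py nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4422    # I/O moves back to 4420

Each removal deletes the active RDMA qpair, so everything in flight on it is aborted and picked up again on the surviving path, which is what the try.txt dump records.
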
00:21:19.805 [2024-11-20 18:06:21.332747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629215 ] 00:21:19.805 [2024-11-20 18:06:21.410308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.805 [2024-11-20 18:06:21.446179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.805 Running I/O for 15 seconds... 00:21:19.805 16624.00 IOPS, 64.94 MiB/s [2024-11-20T17:06:38.341Z] 8889.00 IOPS, 34.72 MiB/s [2024-11-20T17:06:38.341Z]
[2024-11-20 18:06:24.816016] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [several hundred near-identical lines trimmed. For each in-flight I/O on qid:1 (READ and WRITE, nsid:1, len:8, lba 11152 through 11840 in this stretch; READs carry SGL KEYED DATA BLOCK ADDRESS key:0x182b00, WRITEs SGL DATA BLOCK OFFSET 0x0) the command print is followed by a completion of ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0, i.e. the verify I/O queued on the first portal's qpair was aborted when that listener was removed.]
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.817985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.817993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.818002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.818009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.818018] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.818025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.808 [2024-11-20 18:06:24.818035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.808 [2024-11-20 18:06:24.818042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.809 [2024-11-20 18:06:24.818193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.818298] rdma_provider_verbs.c: 97:spdk_rdma_provider_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:21:19.809 [2024-11-20 18:06:24.820519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.809 [2024-11-20 18:06:24.820529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.809 [2024-11-20 18:06:24.820537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12168 len:8 PRP1 0x0 PRP2 0x0 00:21:19.809 [2024-11-20 18:06:24.820545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.809 [2024-11-20 18:06:24.820579] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:21:19.809 [2024-11-20 18:06:24.820589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:19.809 [2024-11-20 18:06:24.824147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:19.809 [2024-11-20 18:06:24.844106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:21:19.809 [2024-11-20 18:06:24.895153] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
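Every completion in the storm above carries the NVMe status "(00/08)": status code type 0x0 (generic command status), status code 0x08, i.e. Command Aborted due to SQ Deletion. That is the expected signature of this test: destroying the RDMA qpair for failover flushes every queued command back to the bdev layer. A minimal, hypothetical helper (not part of the SPDK tree) that condenses such a storm into one line per opcode and queue, relying only on the nvme_io_qpair_print_command() record format visible above:

#!/usr/bin/env python3
# Hypothetical log condenser: collapse an SPDK abort storm into one summary
# line per (opcode, sqid). Assumes only the record format shown in this log,
# e.g. "WRITE sqid:1 cid:110 nsid:1 lba:11528 len:8 SGL DATA BLOCK ..."
import re
import sys
from collections import defaultdict

CMD = re.compile(r"\b(READ|WRITE) sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(lines):
    per_queue = defaultdict(list)          # (opcode, sqid) -> LBAs seen
    for line in lines:
        m = CMD.search(line)
        if m:
            opc, sqid, lba, _nlb = m.groups()
            per_queue[(opc, int(sqid))].append(int(lba))
    for (opc, sqid), lbas in sorted(per_queue.items()):
        # Every command echoed in this storm was completed with
        # ABORTED - SQ DELETION, so counting command records is enough.
        print(f"{opc} sqid:{sqid}: {len(lbas)} aborted, lba {min(lbas)}-{max(lbas)}")

if __name__ == "__main__":
    summarize(sys.stdin)

Fed the records above on stdin, it would report all 81 aborted WRITEs on sqid:1 (80 SGL WRITEs at lba 11528-12160 plus the manually completed one at lba 12168) in a single line instead of ~160 log lines.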
00:21:19.809 10572.67 IOPS, 41.30 MiB/s [2024-11-20T17:06:38.345Z]
12047.25 IOPS, 47.06 MiB/s [2024-11-20T17:06:38.345Z]
11395.60 IOPS, 44.51 MiB/s [2024-11-20T17:06:38.345Z]
[2024-11-20 18:06:28.305185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.809 [2024-11-20 18:06:28.305225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) pair repeats for the admin queue's cid:2, cid:3 and cid:4, timestamps 18:06:28.305236-18:06:28.305285 ...]
00:21:19.809 [2024-11-20 18:06:28.307517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:21:19.809 [2024-11-20 18:06:28.307532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
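The bdevperf progress lines are consistent with the I/O size visible in the command records: len:8 logical blocks carrying a 0x1000-byte (4 KiB) payload implies 512-byte blocks, so MiB/s is simply IOPS / 256. A quick sanity check (hypothetical snippet, not from this build):

# Verifies the reported MiB/s figures from the IOPS samples above,
# using the 4 KiB I/O size implied by "len:8 ... len:0x1000".
for iops, reported in [(10572.67, 41.30), (12047.25, 47.06), (11395.60, 44.51)]:
    mib_s = iops * 8 * 512 / (1024 * 1024)  # = iops / 256
    assert abs(mib_s - reported) < 0.01
    print(f"{iops:9.2f} IOPS -> {mib_s:5.2f} MiB/s")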
00:21:19.809 [2024-11-20 18:06:28.307542] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:21:19.809 [2024-11-20 18:06:28.307550] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] already in failed state
00:21:19.809 [2024-11-20 18:06:28.307567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.809 [2024-11-20 18:06:28.307576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0
[... the same command / ABORTED - SQ DELETION (00/08) record pair repeats for the rest of the in-flight I/O on sqid:1: WRITEs (SGL DATA BLOCK OFFSET 0x0 len:0x1000) lba 74704 through 75136 and interleaved READs (SGL KEYED DATA BLOCK, len:0x1000, key:0x180b00, per-command buffer addresses in the 0x2000043xxxxx range) lba 74248 through 74624, all in steps of 8, timestamps 18:06:28.307622-18:06:28.312160 ...]
00:21:19.812 [2024-11-20 18:06:28.312195] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x180b00 00:21:19.812 [2024-11-20 18:06:28.312203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x180b00 00:21:19.812 [2024-11-20 18:06:28.312247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x180b00 00:21:19.812 [2024-11-20 18:06:28.312293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x180b00 00:21:19.812 [2024-11-20 18:06:28.312339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x180b00 00:21:19.812 [2024-11-20 18:06:28.312384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x180b00 00:21:19.812 [2024-11-20 18:06:28.312429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x180b00 00:21:19.812 [2024-11-20 18:06:28.312473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x180b00 00:21:19.812 [2024-11-20 18:06:28.312518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75152 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.312958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.312993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.313002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.313037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.313045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.313080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.812 [2024-11-20 18:06:28.313088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.812 [2024-11-20 18:06:28.313123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:28.313131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:28.313166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:28.313176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:28.332941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.813 [2024-11-20 18:06:28.332958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.813 [2024-11-20 18:06:28.332966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75264 len:8 PRP1 0x0 PRP2 0x0 00:21:19.813 [2024-11-20 18:06:28.332974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:28.333039] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Unable to perform failover, already in progress. 00:21:19.813 [2024-11-20 18:06:28.333069] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Unable to perform failover, already in progress. 00:21:19.813 [2024-11-20 18:06:28.336691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:19.813 [2024-11-20 18:06:28.391734] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
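The flood of "(00/08)" completions above is expected during this failover test: SCT 0x00 is the generic status code type and SC 0x08 is "aborted - SQ deletion", i.e. the commands were still queued when the reset tore down the submission queue, so they can safely be retried on a new qpair. Below is a minimal sketch of how an application-level completion callback could recognize this status; the SPDK names used (struct spdk_nvme_cpl, spdk_nvme_cpl_is_error(), SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_ABORTED_SQ_DELETION) are the real ones from spdk/nvme.h and spdk/nvme_spec.h, while the callback itself is hypothetical and not part of this test.

/* Hypothetical completion callback, shown only to decode the
 * "ABORTED - SQ DELETION (00/08)" status printed in the log above. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
        (void)ctx;
        if (spdk_nvme_cpl_is_error(cpl)) {
                /* "(00/08)": SCT 0x00 = generic, SC 0x08 = aborted, SQ deleted.
                 * The command never executed, so it is safe to resubmit it
                 * once the controller reset completes. */
                bool requeue = cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
                fprintf(stderr, "I/O aborted (sct=%u sc=%u), requeue=%d\n",
                        cpl->status.sct, cpl->status.sc, requeue);
                return;
        }
        /* success path */
}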
00:21:19.813 10492.33 IOPS, 40.99 MiB/s [2024-11-20T17:06:38.349Z] 11392.43 IOPS, 44.50 MiB/s [2024-11-20T17:06:38.349Z] 12052.12 IOPS, 47.08 MiB/s [2024-11-20T17:06:38.349Z] 12415.56 IOPS, 48.50 MiB/s [2024-11-20T17:06:38.349Z] [2024-11-20 18:06:32.682279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x182b00 00:21:19.813 [2024-11-20 18:06:32.682631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 
18:06:32.682640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.813 [2024-11-20 18:06:32.682764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.813 [2024-11-20 18:06:32.682771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 
[2024-11-20 18:06:32.682814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.682926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 
sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.682985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.682992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 
key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.814 [2024-11-20 18:06:32.683264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 18:06:32.683292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.814 [2024-11-20 
18:06:32.683310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x182b00 00:21:19.814 [2024-11-20 18:06:32.683317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683467] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 
18:06:32.683630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.815 [2024-11-20 18:06:32.683790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 18:06:32.683942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x182b00 00:21:19.815 [2024-11-20 18:06:32.683949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 00:21:19.815 [2024-11-20 
18:06:32.683959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x182b00 00:21:19.816 [2024-11-20 18:06:32.683966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:22b99000 sqhd:7250 p:0 m:0 dnr:0 
[... dozens of further queued READ/WRITE commands (lba 2056 through 2680) printed with the same ABORTED - SQ DELETION (00/08) completion elided ...] 
00:21:19.816 [2024-11-20 18:06:32.686937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.816 [2024-11-20 18:06:32.686949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.816 [2024-11-20 18:06:32.686956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:8 PRP1 0x0 PRP2 0x0 00:21:19.816 [2024-11-20 18:06:32.686964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.816 [2024-11-20 18:06:32.686999] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:21:19.816 [2024-11-20 18:06:32.687009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:19.816 [2024-11-20 18:06:32.690614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:19.816 [2024-11-20 18:06:32.710607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:21:19.816 [2024-11-20 18:06:32.753096] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
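The storm of ABORTED - SQ DELETION completions above is the expected side effect of a failover: when the submission queue on the old path is torn down, every queued request is aborted and completed manually with that status before the controller reconnects on the new path. A minimal sketch for tallying those events from a captured log such as try.txt (path as logged later in this run; grep patterns taken verbatim from the messages above):

# Tally abort/failover events in a captured bdevperf log.
log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
aborts=$(grep -c 'ABORTED - SQ DELETION' "$log")            # one per queued request
failovers=$(grep -c 'Start failover from' "$log")           # one per path switch
resets=$(grep -c 'Resetting controller successful' "$log")
echo "aborts=$aborts failovers=$failovers successful_resets=$resets"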
00:21:19.816 11188.80 IOPS, 43.71 MiB/s [2024-11-20T17:06:38.352Z] 11695.09 IOPS, 45.68 MiB/s [2024-11-20T17:06:38.352Z] 12107.17 IOPS, 47.29 MiB/s [2024-11-20T17:06:38.352Z] 12458.15 IOPS, 48.66 MiB/s [2024-11-20T17:06:38.352Z] 12773.57 IOPS, 49.90 MiB/s 00:21:19.816 Latency(us) 00:21:19.816 [2024-11-20T17:06:38.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.816 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:19.816 Verification LBA range: start 0x0 length 0x4000 00:21:19.817 NVMe0n1 : 15.01 13021.70 50.87 287.63 0.00 9586.97 348.16 1034594.99 00:21:19.817 [2024-11-20T17:06:38.353Z] =================================================================================================================== 00:21:19.817 [2024-11-20T17:06:38.353Z] Total : 13021.70 50.87 287.63 0.00 9586.97 348.16 1034594.99 00:21:19.817 Received shutdown signal, test time was about 15.000000 seconds 00:21:19.817 00:21:19.817 Latency(us) 00:21:19.817 [2024-11-20T17:06:38.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.817 [2024-11-20T17:06:38.353Z] =================================================================================================================== 00:21:19.817 [2024-11-20T17:06:38.353Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3632562 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3632562 /var/tmp/bdevperf.sock 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3632562 ']' 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
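The bdevperf instance launched above runs with -z, so it starts idle and waits for RPC commands on /var/tmp/bdevperf.sock; the harness then configures controllers over that socket before any I/O is issued. A condensed sketch of that pattern, using only commands that appear in this trace (the waitforlisten polling the harness does between steps is elided):

# Start bdevperf in wait-for-RPC mode and drive it over its UNIX socket.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock
"$SPDK"/build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# Attach the target once the socket is listening (listener added beforehand).
"$SPDK"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Kick off the configured verify job and wait for it to finish.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
kill "$bdevperf_pid"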
00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.817 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:20.386 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.386 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:20.386 18:06:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:20.647 [2024-11-20 18:06:39.025731] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:20.647 18:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:20.907 [2024-11-20 18:06:39.210282] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:20.907 18:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:21.168 NVMe0n1 00:21:21.168 18:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:21.430 00:21:21.431 18:06:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:21.691 00:21:21.691 18:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:21.691 18:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:21.691 18:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:21.950 18:06:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:25.247 18:06:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:25.247 18:06:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:25.247 18:06:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3633577 00:21:25.247 18:06:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.247 18:06:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3633577 00:21:26.189 { 00:21:26.189 "results": [ 00:21:26.189 { 00:21:26.189 "job": "NVMe0n1", 
00:21:26.189 "core_mask": "0x1", 00:21:26.189 "workload": "verify", 00:21:26.189 "status": "finished", 00:21:26.189 "verify_range": { 00:21:26.189 "start": 0, 00:21:26.189 "length": 16384 00:21:26.189 }, 00:21:26.189 "queue_depth": 128, 00:21:26.189 "io_size": 4096, 00:21:26.189 "runtime": 1.009892, 00:21:26.189 "iops": 16603.755649118917, 00:21:26.189 "mibps": 64.85842050437077, 00:21:26.189 "io_failed": 0, 00:21:26.189 "io_timeout": 0, 00:21:26.189 "avg_latency_us": 7661.678982188294, 00:21:26.189 "min_latency_us": 2048.0, 00:21:26.189 "max_latency_us": 13981.013333333334 00:21:26.189 } 00:21:26.189 ], 00:21:26.189 "core_count": 1 00:21:26.189 } 00:21:26.189 18:06:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:26.189 [2024-11-20 18:06:38.068182] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:21:26.189 [2024-11-20 18:06:38.068240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632562 ] 00:21:26.189 [2024-11-20 18:06:38.146157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.189 [2024-11-20 18:06:38.181640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.189 [2024-11-20 18:06:40.331303] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:21:26.189 [2024-11-20 18:06:40.332061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:21:26.189 [2024-11-20 18:06:40.332095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:21:26.189 [2024-11-20 18:06:40.359553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:21:26.189 [2024-11-20 18:06:40.385647] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:21:26.189 Running I/O for 1 seconds... 
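As a cross-check on the JSON above, the mibps field is just iops scaled by the 4096-byte I/O size: 16603.755649 * 4096 / 2^20 = 64.858421, matching both the JSON and the 64.86 MiB/s row in the table that follows.

# Recompute MiB/s from IOPS and io_size (values from the JSON above).
awk 'BEGIN { printf "%.8f\n", 16603.755649118917 * 4096 / 1048576 }'   # 64.85842050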
00:21:26.189 16585.00 IOPS, 64.79 MiB/s 00:21:26.189 Latency(us) 00:21:26.189 [2024-11-20T17:06:44.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:26.189 Verification LBA range: start 0x0 length 0x4000 00:21:26.189 NVMe0n1 : 1.01 16603.76 64.86 0.00 0.00 7661.68 2048.00 13981.01 00:21:26.189 [2024-11-20T17:06:44.725Z] =================================================================================================================== 00:21:26.189 [2024-11-20T17:06:44.725Z] Total : 16603.76 64.86 0.00 0.00 7661.68 2048.00 13981.01 00:21:26.189 18:06:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:26.189 18:06:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:26.450 18:06:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.710 18:06:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:26.710 18:06:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:26.710 18:06:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.969 18:06:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3632562 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3632562 ']' 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3632562 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3632562 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3632562' 00:21:30.268 killing process with pid 3632562 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3632562 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3632562 00:21:30.268 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.529 18:06:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:30.529 rmmod nvme_rdma 00:21:30.529 rmmod nvme_fabrics 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3628844 ']' 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3628844 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3628844 ']' 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3628844 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.529 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3628844 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3628844' 00:21:30.789 killing process with pid 3628844 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3628844 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3628844 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:30.789 00:21:30.789 real 0m38.971s 00:21:30.789 user 2m4.402s 00:21:30.789 sys 0m8.216s 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.789 18:06:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
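Teardown above mirrors setup: the subsystem is deleted over RPC, the scratch log is removed, the kernel initiator modules are unloaded (the bare rmmod nvme_rdma / rmmod nvme_fabrics lines are modprobe's verbose output), and the long-running nvmf target is killed by PID. The same sequence as a standalone sketch (PID from this run; the set +e/-e guards and the retry loop around modprobe are elided):

# Tear down the NVMe-oF target after a host-side test.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f "$SPDK"/test/nvmf/host/try.txt
modprobe -v -r nvme-rdma        # prints the rmmod lines seen above
modprobe -v -r nvme-fabrics
nvmfpid=3628844                 # target app PID recorded earlier in this run
kill "$nvmfpid"                 # the harness then waits for it to exit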
00:21:30.789 ************************************ 00:21:30.789 END TEST nvmf_failover 00:21:30.789 ************************************ 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.050 ************************************ 00:21:31.050 START TEST nvmf_host_discovery 00:21:31.050 ************************************ 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:21:31.050 * Looking for test storage... 00:21:31.050 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.050 --rc genhtml_branch_coverage=1 00:21:31.050 --rc genhtml_function_coverage=1 00:21:31.050 --rc genhtml_legend=1 00:21:31.050 --rc geninfo_all_blocks=1 00:21:31.050 --rc geninfo_unexecuted_blocks=1 00:21:31.050 00:21:31.050 ' 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.050 --rc genhtml_branch_coverage=1 00:21:31.050 --rc genhtml_function_coverage=1 00:21:31.050 --rc genhtml_legend=1 00:21:31.050 --rc geninfo_all_blocks=1 00:21:31.050 --rc geninfo_unexecuted_blocks=1 00:21:31.050 00:21:31.050 ' 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.050 --rc genhtml_branch_coverage=1 00:21:31.050 --rc genhtml_function_coverage=1 00:21:31.050 --rc genhtml_legend=1 00:21:31.050 --rc geninfo_all_blocks=1 00:21:31.050 --rc geninfo_unexecuted_blocks=1 00:21:31.050 00:21:31.050 ' 00:21:31.050 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.050 --rc genhtml_branch_coverage=1 00:21:31.050 --rc genhtml_function_coverage=1 00:21:31.050 --rc genhtml_legend=1 00:21:31.050 --rc geninfo_all_blocks=1 00:21:31.050 --rc geninfo_unexecuted_blocks=1 00:21:31.050 00:21:31.050 ' 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:31.051 18:06:49 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.051 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.312 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:21:31.312 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:21:31.312 00:21:31.312 real 0m0.231s 00:21:31.312 user 0m0.125s 00:21:31.312 sys 0m0.118s 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.312 ************************************ 00:21:31.312 END TEST nvmf_host_discovery 00:21:31.312 ************************************ 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.312 ************************************ 00:21:31.312 START TEST nvmf_host_multipath_status 00:21:31.312 ************************************ 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:31.312 * Looking for test storage... 00:21:31.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.312 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:21:31.575 18:06:49 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.575 --rc genhtml_branch_coverage=1 00:21:31.575 --rc genhtml_function_coverage=1 00:21:31.575 --rc genhtml_legend=1 00:21:31.575 --rc geninfo_all_blocks=1 00:21:31.575 --rc geninfo_unexecuted_blocks=1 00:21:31.575 00:21:31.575 ' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.575 --rc genhtml_branch_coverage=1 00:21:31.575 --rc genhtml_function_coverage=1 00:21:31.575 --rc genhtml_legend=1 00:21:31.575 --rc geninfo_all_blocks=1 00:21:31.575 --rc geninfo_unexecuted_blocks=1 00:21:31.575 00:21:31.575 ' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.575 --rc genhtml_branch_coverage=1 00:21:31.575 --rc genhtml_function_coverage=1 00:21:31.575 --rc genhtml_legend=1 00:21:31.575 --rc geninfo_all_blocks=1 00:21:31.575 --rc geninfo_unexecuted_blocks=1 00:21:31.575 00:21:31.575 ' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.575 --rc genhtml_branch_coverage=1 00:21:31.575 --rc genhtml_function_coverage=1 
00:21:31.575 --rc genhtml_legend=1 00:21:31.575 --rc geninfo_all_blocks=1 00:21:31.575 --rc geninfo_unexecuted_blocks=1 00:21:31.575 00:21:31.575 ' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.575 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:31.576 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.576 18:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.710 18:06:57 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:39.710 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:39.710 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:39.711 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:39.711 Found net devices under 0000:98:00.0: mlx_0_0 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:39.711 Found net devices under 0000:98:00.1: mlx_0_1 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:39.711 
18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:39.711 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:39.711 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:39.711 altname enp152s0f0np0 00:21:39.711 altname ens817f0np0 00:21:39.711 inet 192.168.100.8/24 scope global mlx_0_0 00:21:39.711 valid_lft forever preferred_lft forever 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
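
The get_ip_address helper traced here reduces to a single pipeline: take the first IPv4 address on an interface and strip the prefix length. A minimal standalone sketch of that step (interface name and expected output are taken from this run; on any other host the names and addresses would differ):

  # Print the first IPv4 address of an RDMA netdev, without the /24 suffix.
  iface=mlx_0_0
  ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
  # -> 192.168.100.8 on this host; the same probe resolves mlx_0_1 to 192.168.100.9 below
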
00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:39.711 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:39.711 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:39.711 altname enp152s0f1np1 00:21:39.711 altname ens817f1np1 00:21:39.711 inet 192.168.100.9/24 scope global mlx_0_1 00:21:39.711 valid_lft forever preferred_lft forever 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:39.711 18:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:39.711 18:06:58 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:39.711 192.168.100.9' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:39.711 192.168.100.9' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:39.711 192.168.100.9' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:39.711 18:06:58 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3638998 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3638998 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3638998 ']' 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.711 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:39.711 [2024-11-20 18:06:58.161467] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:21:39.711 [2024-11-20 18:06:58.161530] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.711 [2024-11-20 18:06:58.240584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:39.975 [2024-11-20 18:06:58.276314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.975 [2024-11-20 18:06:58.276346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.976 [2024-11-20 18:06:58.276354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.976 [2024-11-20 18:06:58.276361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.976 [2024-11-20 18:06:58.276366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
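
nvmfappstart, traced above, boils down to launching the target and blocking until its RPC socket answers. A hedged sketch of that step (binary path, flags, and socket path are verbatim from the trace; the polling loop is an assumption standing in for the harness's own waitforlisten helper, and rpc.py is the one under the SPDK tree):

  # Start nvmf_tgt on cores 0-1 (-m 0x3) with all tracepoint groups enabled,
  # then poll the RPC socket until the app is ready to accept commands.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
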
00:21:39.976 [2024-11-20 18:06:58.277644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.976 [2024-11-20 18:06:58.277646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.546 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.546 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:40.546 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.546 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.546 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:40.546 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.546 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3638998 00:21:40.546 18:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:40.813 [2024-11-20 18:06:59.169795] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17e67a0/0x17eac90) succeed. 00:21:40.814 [2024-11-20 18:06:59.182902] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17e7cf0/0x182c330) succeed. 00:21:40.814 18:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:41.081 Malloc0 00:21:41.081 18:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:41.341 18:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.341 18:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:41.601 [2024-11-20 18:06:59.936643] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:41.601 18:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:41.601 [2024-11-20 18:07:00.128923] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3639383 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3639383 /var/tmp/bdevperf.sock 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3639383 ']' 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:41.862 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:42.123 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:42.383 Nvme0n1 00:21:42.383 18:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:42.644 Nvme0n1 00:21:42.644 18:07:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:42.644 18:07:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:45.188 18:07:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:45.189 18:07:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:21:45.189 18:07:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:45.189 18:07:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:46.131 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:46.131 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:46.131 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.131 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:46.391 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.391 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:46.392 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.392 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:46.392 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:46.392 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:46.392 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.392 18:07:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:46.652 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.652 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:46.652 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.652 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:46.913 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.913 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:46.913 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.913 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:46.913 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.913 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
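
Each port_status call traced above is the same three-step probe: dump bdevperf's I/O paths over its RPC socket, pluck one field for the listener on a given port with jq, and compare it against the expected value. A condensed sketch of that helper (the RPC command and jq filter are verbatim from the trace; the wrapper function is a paraphrase of multipath_status.sh's own helper, with rpc.py shortened to the SPDK tree path):

  # port_status PORT FIELD EXPECTED, e.g. port_status 4420 current true
  port_status() {
      local got
      got=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
      [[ $got == "$3" ]]
  }

check_status is then six such probes in a row: current, connected, and accessible for each of ports 4420 and 4421.
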
00:21:46.913 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.913 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:47.173 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:47.173 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:47.174 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:47.434 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:47.434 18:07:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:48.817 18:07:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:48.817 18:07:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:48.817 18:07:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.817 18:07:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:48.817 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:48.817 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:48.817 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.817 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:48.817 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.817 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:48.817 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.817 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:49.078 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.078 18:07:07 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:49.078 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.078 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:49.338 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.338 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:49.338 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.338 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:49.338 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.338 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:49.338 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.339 18:07:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:49.599 18:07:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.599 18:07:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:49.599 18:07:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:49.858 18:07:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:21:50.119 18:07:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:51.119 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:51.119 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:51.119 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:51.119 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.119 18:07:09 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.119 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:51.119 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.119 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:51.407 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:51.407 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:51.407 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.407 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:51.668 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.668 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:51.668 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.668 18:07:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:51.668 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.668 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:51.668 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.668 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:51.929 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.929 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:51.929 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.929 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:52.190 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.190 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:21:52.190 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:52.190 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:52.450 18:07:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:53.391 18:07:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:53.391 18:07:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:53.391 18:07:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.391 18:07:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:53.651 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.651 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:53.651 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.651 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:53.913 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:53.913 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:53.913 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.913 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:53.913 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.913 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:53.913 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.913 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:54.174 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.174 18:07:12 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:54.174 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.174 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:54.436 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.436 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:54.436 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.436 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:54.436 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:54.436 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:54.436 18:07:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:21:54.696 18:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:54.956 18:07:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:55.897 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:55.897 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:55.897 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.897 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:55.897 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:55.897 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:56.158 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.158 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:56.158 18:07:14 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:56.158 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:56.158 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.158 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:56.418 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.418 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:56.419 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.419 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:56.679 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.679 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:56.679 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.679 18:07:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:56.679 18:07:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:56.679 18:07:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:56.679 18:07:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.679 18:07:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:56.939 18:07:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:56.939 18:07:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:56.939 18:07:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:21:57.200 18:07:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:57.200 18:07:15 
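
The set_ANA_state helper driving each of these passes flips the ANA group state of the two listeners, after which the test sleeps a second so the host can observe the change before check_status runs. A condensed sketch (RPC name, NQN, address, and ports are verbatim from the trace; the wrapper is a paraphrase of the script's helper):

  # set_ANA_state STATE_4420 STATE_4421, e.g. set_ANA_state inaccessible optimized
  set_ANA_state() {
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s 4420 -n "$1"
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s 4421 -n "$2"
  }
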
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:58.586 18:07:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:58.586 18:07:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:58.586 18:07:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:58.586 18:07:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.586 18:07:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:58.586 18:07:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:58.586 18:07:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.586 18:07:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:58.586 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.586 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:58.586 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.586 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:58.846 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.846 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:58.846 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.846 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:59.107 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.107 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:59.107 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:59.107 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.107 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]]
00:21:59.107 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:59.107 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:59.107 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:59.368 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:59.368 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:21:59.628 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:21:59.628 18:07:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized
00:21:59.628 18:07:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:21:59.888 18:07:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:22:00.825 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:22:00.825 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:00.825 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:00.825 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:01.084 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:01.084 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:01.084 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:01.084 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:01.342 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:01.342 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:01.342 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:01.342 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:01.342 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:01.343 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:01.343 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:01.343 18:07:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:01.602 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:01.602 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:01.602 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:01.602 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:01.862 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:01.862 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:01.862 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:01.862 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:01.862 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:01.862 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:22:01.862 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:22:02.123 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:22:02.384 18:07:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:22:03.323 18:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:22:03.323 18:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:22:03.323 18:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:03.323 18:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:03.582 18:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:03.582 18:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:03.582 18:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:03.582 18:07:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:03.843 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:03.843 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:03.843 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:03.843 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:03.843 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:03.843 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:03.843 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:03.843 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:04.105 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:04.105 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:04.105 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:04.105 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:04.366 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:04.366 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:04.366 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:04.366 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:04.366 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:04.366 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:22:04.366 18:07:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:22:04.627 18:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:22:04.887 18:07:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:22:05.827 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:22:05.827 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:05.827 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:05.827 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:06.087 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:06.087 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:06.087 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:06.087 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:06.087 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:06.087 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:06.087 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:06.087 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:06.347 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:06.347 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:06.347 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:06.347 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:06.609 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:06.609 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:06.609 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:06.609 18:07:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:06.871 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:06.871 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:06.871 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:06.871 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:06.871 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:06.871 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:22:06.871 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:22:07.131 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:22:07.390 18:07:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:22:08.329 18:07:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:22:08.329 18:07:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:08.329 18:07:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:08.329 18:07:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:08.589 18:07:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:08.589 18:07:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:22:08.589 18:07:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:08.589 18:07:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:08.589 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:08.589 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:08.589 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:08.589 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:08.849 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:08.849 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:08.849 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:08.849 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:09.109 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:09.109 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:09.109 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:09.109 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:09.109 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:09.109 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:09.109 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
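The trace above repeats one pattern for every phase: flip the ANA state of each rdma listener on the target with nvmf_subsystem_listener_set_ana_state, sleep a second, then ask the bdevperf host for its view of the I/O paths and compare one field per listener. What follows is a minimal standalone sketch of that pattern, reconstructed from this trace rather than copied from multipath_status.sh, and assuming the SPDK checkout and RPC socket paths used in this run:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock

  # port_status <trsvcid> <field> <expected>: query the host's view of its
  # I/O paths and compare one field (current / connected / accessible) for
  # the path whose listener uses the given transport service ID.
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ $actual == "$expected" ]]
  }

  # Example, mirroring the last phase above (non_optimized + inaccessible):
  "$SPDK/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
  "$SPDK/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
  sleep 1
  port_status 4420 current true && port_status 4421 accessible false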
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3639383
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3639383 ']'
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3639383
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639383
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639383'
killing process with pid 3639383
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3639383
00:22:09.369 18:07:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3639383
00:22:09.369 {
00:22:09.369 "results": [
00:22:09.369 {
00:22:09.369 "job": "Nvme0n1",
00:22:09.369 "core_mask": "0x4",
00:22:09.369 "workload": "verify",
00:22:09.369 "status": "terminated",
00:22:09.369 "verify_range": {
00:22:09.369 "start": 0,
00:22:09.369 "length": 16384
00:22:09.369 },
00:22:09.369 "queue_depth": 128,
00:22:09.369 "io_size": 4096,
00:22:09.369 "runtime": 26.640066,
00:22:09.369 "iops": 15570.682144706398,
00:22:09.369 "mibps": 60.822977127759366,
00:22:09.369 "io_failed": 0,
00:22:09.369 "io_timeout": 0,
00:22:09.369 "avg_latency_us": 8201.200408096354,
00:22:09.369 "min_latency_us": 989.8666666666667,
00:22:09.369 "max_latency_us": 3019898.88
00:22:09.369 }
00:22:09.369 ],
00:22:09.369 "core_count": 1
00:22:09.369 }
00:22:09.633 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3639383
00:22:09.633 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:09.633 [2024-11-20 18:07:00.210000] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization...
00:22:09.633 [2024-11-20 18:07:00.210055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639383 ]
00:22:09.633 [2024-11-20 18:07:00.273298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:09.633 [2024-11-20 18:07:00.302153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:09.633 Running I/O for 90 seconds...
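Before the raw bdevperf output that follows, a quick consistency check on the results block above: the reported mibps is just iops scaled by the 4 KiB io_size, and iops times runtime recovers the approximate total I/O count. This is only arithmetic on the numbers already printed, not new data from the run:

  # Recompute bdevperf's derived fields from its own summary (values copied
  # from the JSON results above).
  awk 'BEGIN {
      iops = 15570.682144706398; io_size = 4096; runtime = 26.640066
      printf "MiB/s    : %.6f\n", iops * io_size / (1024 * 1024)  # ~60.822977
      printf "total I/O: %.0f\n", iops * runtime                  # ~414804
  }'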
00:22:09.633 14313.00 IOPS, 55.91 MiB/s [2024-11-20T17:07:28.169Z] 14400.00 IOPS, 56.25 MiB/s [2024-11-20T17:07:28.169Z] 14422.67 IOPS, 56.34 MiB/s [2024-11-20T17:07:28.169Z] 14456.50 IOPS, 56.47 MiB/s [2024-11-20T17:07:28.169Z] 15067.20 IOPS, 58.86 MiB/s [2024-11-20T17:07:28.169Z] 16112.00 IOPS, 62.94 MiB/s [2024-11-20T17:07:28.169Z] 16859.29 IOPS, 65.86 MiB/s [2024-11-20T17:07:28.169Z] 16725.88 IOPS, 65.34 MiB/s [2024-11-20T17:07:28.169Z] 16477.67 IOPS, 64.37 MiB/s [2024-11-20T17:07:28.169Z] 16279.30 IOPS, 63.59 MiB/s [2024-11-20T17:07:28.169Z] 16117.09 IOPS, 62.96 MiB/s [2024-11-20T17:07:28.169Z] [2024-11-20 18:07:13.055253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x181100 00:22:09.633 
[2024-11-20 18:07:13.055396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x181100 00:22:09.633 [2024-11-20 18:07:13.055465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:09.633 [2024-11-20 18:07:13.055473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.634 [2024-11-20 18:07:13.055585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.634 [2024-11-20 18:07:13.055598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.634 [2024-11-20 18:07:13.055611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 
dnr:0 00:22:09.634 [2024-11-20 18:07:13.055644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 
18:07:13.055948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.055992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.055997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056212] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:09.634 [2024-11-20 18:07:13.056227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181100 00:22:09.634 [2024-11-20 18:07:13.056232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181100 00:22:09.635 [2024-11-20 18:07:13.056246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x181100 00:22:09.635 [2024-11-20 18:07:13.056260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.635 [2024-11-20 18:07:13.056919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x181100 00:22:09.635 [2024-11-20 18:07:13.056934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x181100 00:22:09.635 [2024-11-20 18:07:13.056950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x181100 00:22:09.635 [2024-11-20 18:07:13.056967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x181100 00:22:09.635 [2024-11-20 18:07:13.056983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.056995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x181100 00:22:09.635 [2024-11-20 18:07:13.057000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:09.635 [2024-11-20 18:07:13.057011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x181100 00:22:09.635 [2024-11-20 18:07:13.057016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[2024-11-20 18:07:13.057027 - 18:07:13.057798] nvme_qpair.c: 243/474: *NOTICE*: repeated command/completion pairs on qid:1, condensed - 24 READs (lba 74560-74744, len:8, SGL KEYED DATA BLOCK, key:0x181100) and 12 WRITEs (lba 75008-75096, len:8, SGL DATA BLOCK OFFSET 0x0), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0x0061-0x0004 (wrapping), p:0 m:0 dnr:0
15790.33 IOPS, 61.68 MiB/s [2024-11-20T17:07:28.172Z]
14575.69 IOPS, 56.94 MiB/s [2024-11-20T17:07:28.172Z]
13534.57 IOPS, 52.87 MiB/s [2024-11-20T17:07:28.172Z]
12855.20 IOPS, 50.22 MiB/s [2024-11-20T17:07:28.172Z]
13385.94 IOPS, 52.29 MiB/s [2024-11-20T17:07:28.172Z]
13839.47 IOPS, 54.06 MiB/s [2024-11-20T17:07:28.172Z]
14137.22 IOPS, 55.22 MiB/s [2024-11-20T17:07:28.172Z]
14397.47 IOPS, 56.24 MiB/s [2024-11-20T17:07:28.172Z]
14707.00 IOPS, 57.45 MiB/s [2024-11-20T17:07:28.172Z]
15023.86 IOPS, 58.69 MiB/s [2024-11-20T17:07:28.172Z]
15308.55 IOPS, 59.80 MiB/s [2024-11-20T17:07:28.172Z]
15465.57 IOPS, 60.41 MiB/s [2024-11-20T17:07:28.172Z]
15609.83 IOPS, 60.98 MiB/s [2024-11-20T17:07:28.172Z]
[2024-11-20 18:07:25.672328 - 18:07:25.673574] nvme_qpair.c: 243/474: *NOTICE*: second burst of command/completion pairs on qid:1, condensed - READs (lba 47192-47696, len:8, SGL KEYED DATA BLOCK, key:0x181100) and WRITEs (lba 47720-48200, len:8, SGL DATA BLOCK OFFSET 0x0), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0x0039-0x0078, p:0 m:0 dnr:0
15640.48 IOPS, 61.10 MiB/s [2024-11-20T17:07:28.172Z]
15597.46 IOPS, 60.93 MiB/s [2024-11-20T17:07:28.172Z]
Received shutdown signal, test time was about 26.640679 seconds

Latency(us)
Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
Nvme0n1                                                                   :      26.64   15570.68      60.82       0.00     0.00    8201.20     989.87 3019898.88
===================================================================================================================
Total
: 15570.68 60.82 0.00 0.00 8201.20 989.87 3019898.88 00:22:09.638 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:09.897 rmmod nvme_rdma 00:22:09.897 rmmod nvme_fabrics 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3638998 ']' 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3638998 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3638998 ']' 00:22:09.897 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3638998 00:22:09.898 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:22:09.898 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.898 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3638998 00:22:09.898 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.898 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.898 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3638998' 00:22:09.898 killing process with pid 3638998 00:22:09.898 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3638998 00:22:09.898 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3638998 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
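The two INACCESSIBLE bursts summarized above are easier to read in aggregate than record by record. Below is a minimal bash sketch for tallying such notices from a saved console log; it assumes only the nvme_qpair.c notice format visible above, and the default log path is a placeholder, not a file this job actually writes.

#!/usr/bin/env bash
# Tally SPDK qpair notices from a saved autotest console log.
# Assumes the "nvme_qpair.c: ...: *NOTICE*: ..." format shown above.
LOG=${1:-nvmf-phy-autotest.log}

# Commands by opcode (READ/WRITE).
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$LOG" \
    | awk '{print $NF}' | sort | uniq -c

# Completions by status string, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".
grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*([0-9a-f/]*)' "$LOG" \
    | sed 's/.*\*NOTICE\*: //' | sort | uniq -c

On a run like this one, the second tally would show the bursts carrying only the ANA INACCESSIBLE status, consistent with the multipath status test driving I/O while a path is reported inaccessible; the IOPS dip to 12855.20 and recovery toward ~15600 in the readings above track the same window.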
00:22:10.158 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:10.158 00:22:10.158 real 0m38.823s 00:22:10.158 user 1m46.021s 00:22:10.158 sys 0m9.287s 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:10.158 ************************************ 00:22:10.158 END TEST nvmf_host_multipath_status 00:22:10.158 ************************************ 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.158 ************************************ 00:22:10.158 START TEST nvmf_discovery_remove_ifc 00:22:10.158 ************************************ 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:10.158 * Looking for test storage... 00:22:10.158 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:22:10.158 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:22:10.419 18:07:28 
nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:10.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.419 --rc genhtml_branch_coverage=1 00:22:10.419 --rc genhtml_function_coverage=1 00:22:10.419 --rc genhtml_legend=1 00:22:10.419 --rc geninfo_all_blocks=1 00:22:10.419 --rc geninfo_unexecuted_blocks=1 00:22:10.419 00:22:10.419 ' 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:10.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.419 --rc genhtml_branch_coverage=1 00:22:10.419 --rc genhtml_function_coverage=1 00:22:10.419 --rc genhtml_legend=1 00:22:10.419 --rc geninfo_all_blocks=1 00:22:10.419 --rc geninfo_unexecuted_blocks=1 00:22:10.419 00:22:10.419 ' 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:10.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.419 --rc genhtml_branch_coverage=1 00:22:10.419 --rc genhtml_function_coverage=1 00:22:10.419 --rc genhtml_legend=1 00:22:10.419 --rc geninfo_all_blocks=1 00:22:10.419 --rc geninfo_unexecuted_blocks=1 00:22:10.419 00:22:10.419 ' 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:10.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.419 --rc genhtml_branch_coverage=1 00:22:10.419 --rc genhtml_function_coverage=1 00:22:10.419 --rc genhtml_legend=1 00:22:10.419 --rc geninfo_all_blocks=1 00:22:10.419 --rc geninfo_unexecuted_blocks=1 
00:22:10.419 00:22:10.419 ' 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.419 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain entries repeated, collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same three toolchain entries repeated, collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same three toolchain entries repeated, collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same three toolchain entries repeated, collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' ''
-eq 1 ']' 00:22:10.420 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:10.420 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:22:10.420 00:22:10.420 real 0m0.234s 00:22:10.420 user 0m0.136s 00:22:10.420 sys 0m0.113s 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.420 ************************************ 00:22:10.420 END TEST nvmf_discovery_remove_ifc 00:22:10.420 ************************************ 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.420 ************************************ 00:22:10.420 START TEST nvmf_identify_kernel_target 00:22:10.420 ************************************ 00:22:10.420 18:07:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:10.682 * Looking for test storage... 
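For reference, the skip recorded a few lines up comes from a short guard at the top of discovery_remove_ifc.sh (traced at @14-@16). A minimal bash sketch of that pattern as reconstructed from the trace; the TEST_TRANSPORT variable name is an assumption for illustration, while the comparison and the message are taken verbatim from the trace:

# Guard sketch reconstructed from discovery_remove_ifc.sh@14-@16.
# TEST_TRANSPORT is an assumed variable name; the trace only shows the
# already-expanded comparison '[' rdma == rdma ']'.
if [ "$TEST_TRANSPORT" == "rdma" ]; then
    echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
    exit 0
fi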
00:22:10.682 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:10.682 18:07:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:10.682 18:07:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:22:10.682 18:07:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:10.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.682 --rc genhtml_branch_coverage=1 00:22:10.682 --rc genhtml_function_coverage=1 00:22:10.682 --rc genhtml_legend=1 00:22:10.682 --rc geninfo_all_blocks=1 00:22:10.682 --rc geninfo_unexecuted_blocks=1 00:22:10.682 00:22:10.682 ' 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:10.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.682 --rc genhtml_branch_coverage=1 00:22:10.682 --rc genhtml_function_coverage=1 00:22:10.682 --rc genhtml_legend=1 00:22:10.682 --rc geninfo_all_blocks=1 00:22:10.682 --rc geninfo_unexecuted_blocks=1 00:22:10.682 00:22:10.682 ' 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:10.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.682 --rc genhtml_branch_coverage=1 00:22:10.682 --rc genhtml_function_coverage=1 00:22:10.682 --rc genhtml_legend=1 00:22:10.682 --rc geninfo_all_blocks=1 00:22:10.682 --rc geninfo_unexecuted_blocks=1 00:22:10.682 00:22:10.682 ' 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:10.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.682 --rc genhtml_branch_coverage=1 00:22:10.682 --rc genhtml_function_coverage=1 00:22:10.682 --rc genhtml_legend=1 00:22:10.682 --rc geninfo_all_blocks=1 00:22:10.682 --rc geninfo_unexecuted_blocks=1 00:22:10.682 00:22:10.682 ' 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.682 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:10.683 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.683 18:07:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
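The `[: : integer expression expected` message above is log noise rather than a failure: common.sh line 33 runs `'[' '' -eq 1 ']'`, a numeric test against a flag that this job's autorun-spdk.conf leaves unset, and `[` cannot treat the empty string as an integer. The test still evaluates false, so the run continues. The usual ways to keep such a guard quiet (the flag name below is a stand-in, not necessarily the variable line 33 actually reads):

    flag=''                              # unset in this job's conf
    [ "$flag" -eq 1 ] && echo on         # prints '[: : integer expression expected'
    [ "${flag:-0}" -eq 1 ] && echo on    # quiet: default the empty value to 0
    [[ $flag -eq 1 ]] && echo on         # quiet: [[ ]] arithmetic treats '' as 0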
local -ga x722 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:18.830 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:18.830 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:18.830 Found net devices under 0000:98:00.0: mlx_0_0 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:18.830 Found net devices under 0000:98:00.1: mlx_0_1 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.830 18:07:37 
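In the scan above, 0x15b3:0x1015 is the Mellanox ConnectX-4 Lx vendor:device pair matched against the harness's mlx list (operands such as `\0\x\1\0\1\7` are just xtrace escaping literal strings like 0x1017 on the right side of `==`), and each matching PCI function is resolved to its netdev through the `/sys/bus/pci/devices/$pci/net/` glob. A standalone version of that walk, assuming only the stock sysfs layout:

    # Print the net interface behind every 0x15b3:0x1015 PCI function,
    # mirroring the 'Found net devices under ...' lines in the trace.
    vendor=0x15b3 device=0x1015
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done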
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:18.830 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:18.831 
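rdma_device_init above begins by loading the whole IB/RDMA core stack before touching any interface; condensed from the modprobe lines in the trace:

    # load_ib_rdma_modules, as traced. modprobe resolves dependencies on
    # its own, so the explicit ordering mostly serves as documentation.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done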
18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:18.831 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:18.831 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:18.831 altname enp152s0f0np0 00:22:18.831 altname ens817f0np0 00:22:18.831 inet 192.168.100.8/24 scope global mlx_0_0 00:22:18.831 valid_lft forever preferred_lft forever 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:18.831 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:18.831 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:18.831 altname enp152s0f1np1 00:22:18.831 altname ens817f1np1 00:22:18.831 inet 192.168.100.9/24 scope global mlx_0_1 00:22:18.831 valid_lft forever preferred_lft forever 00:22:18.831 18:07:37 
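Both ports come up with addresses in the 192.168.100.0/24 test prefix (NVMF_IP_PREFIX), read back through the awk/cut pipeline traced above. As a reusable helper:

    # First IPv4 address on an interface, the trace's get_ip_address pipeline:
    # field 4 of 'ip -o -4' output is the CIDR address; cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig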
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:18.831 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:18.832 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:18.832 
18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:18.832 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:18.832 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:18.832 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:18.832 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:18.832 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:19.093 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:19.093 192.168.100.9' 00:22:19.093 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:19.093 192.168.100.9' 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:19.094 192.168.100.9' 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:19.094 18:07:37 
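RDMA_IP_LIST is a newline-separated string, and the first and second target IPs are peeled off it with the head/tail calls traced here:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'       # as gathered above
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)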
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:19.094 18:07:37 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:22.399 Waiting for block devices as requested 00:22:22.399 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:22:22.660 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:22:22.660 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:22:22.660 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:22:22.921 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:22:22.921 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:22:22.921 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:22:23.182 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:22:23.182 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:22:23.444 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:22:23.444 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:22:23.444 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:22:23.444 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:22:23.704 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:22:23.704 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:22:23.704 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:22:23.704 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:22:23.966 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:23.966 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:23.966 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:23.966 18:07:42 
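The `(8086 0b00): vfio-pci -> ioatdma` and `(144d a80a): vfio-pci -> nvme` lines are setup.sh reset handing the Intel I/OAT DMA channels and the Samsung NVMe drive back to their kernel drivers, so that a real /dev/nvme0n1 exists to export. configure_kernel_target then builds an in-kernel NVMe-oF target purely through nvmet's configfs tree: the scan below settles on /dev/nvme0n1 once `No valid GPT data, bailing` confirms it carries no partition table, and the bare `echo`s populate the subsystem, namespace, and RDMA port before a symlink ties port to subsystem. Condensed, with the attribute file behind each write inferred from the standard nvmet layout (the trace shows only the echoes):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    modprobe nvmet                                     # exposes $nvmet via configfs
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1              > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1   > "$subsys/namespaces/1/device_path"
    echo 1              > "$subsys/namespaces/1/enable"
    echo 192.168.100.8  > "$nvmet/ports/1/addr_traddr"
    echo rdma           > "$nvmet/ports/1/addr_trtype"
    echo 4420           > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4           > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"       # port starts listening here

The attr_model write is confirmed by the identify dump further down ("Model Number: SPDK-nqn.2016-06.io.spdk:testnqn"); the clean_kernel_target teardown at the end of the test reverses the sequence: disable the namespace, remove the port symlink, rmdir the three nodes, then modprobe -r nvmet_rdma nvmet.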
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:23.966 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:23.966 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:23.966 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:23.966 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:23.966 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:24.227 No valid GPT data, bailing 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:24.227 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -t rdma -s 4420 00:22:24.488 00:22:24.488 Discovery Log Number of Records 2, Generation counter 2 00:22:24.488 =====Discovery Log Entry 0====== 00:22:24.488 trtype: rdma 00:22:24.488 adrfam: ipv4 00:22:24.488 subtype: current discovery subsystem 00:22:24.488 treq: not specified, sq 
flow control disable supported 00:22:24.488 portid: 1 00:22:24.488 trsvcid: 4420 00:22:24.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:24.488 traddr: 192.168.100.8 00:22:24.488 eflags: none 00:22:24.488 rdma_prtype: not specified 00:22:24.488 rdma_qptype: connected 00:22:24.488 rdma_cms: rdma-cm 00:22:24.488 rdma_pkey: 0x0000 00:22:24.488 =====Discovery Log Entry 1====== 00:22:24.488 trtype: rdma 00:22:24.488 adrfam: ipv4 00:22:24.488 subtype: nvme subsystem 00:22:24.488 treq: not specified, sq flow control disable supported 00:22:24.488 portid: 1 00:22:24.488 trsvcid: 4420 00:22:24.488 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:24.488 traddr: 192.168.100.8 00:22:24.488 eflags: none 00:22:24.488 rdma_prtype: not specified 00:22:24.488 rdma_qptype: connected 00:22:24.488 rdma_cms: rdma-cm 00:22:24.488 rdma_pkey: 0x0000 00:22:24.488 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:22:24.488 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:24.488 ===================================================== 00:22:24.488 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:24.488 ===================================================== 00:22:24.488 Controller Capabilities/Features 00:22:24.488 ================================ 00:22:24.488 Vendor ID: 0000 00:22:24.488 Subsystem Vendor ID: 0000 00:22:24.488 Serial Number: 0bbe441a90054e8971bb 00:22:24.488 Model Number: Linux 00:22:24.488 Firmware Version: 6.8.9-20 00:22:24.488 Recommended Arb Burst: 0 00:22:24.488 IEEE OUI Identifier: 00 00 00 00:22:24.488 Multi-path I/O 00:22:24.488 May have multiple subsystem ports: No 00:22:24.488 May have multiple controllers: No 00:22:24.488 Associated with SR-IOV VF: No 00:22:24.488 Max Data Transfer Size: Unlimited 00:22:24.488 Max Number of Namespaces: 0 00:22:24.488 Max Number of I/O Queues: 1024 00:22:24.488 NVMe Specification Version (VS): 1.3 00:22:24.488 NVMe Specification Version (Identify): 1.3 00:22:24.488 Maximum Queue Entries: 128 00:22:24.488 Contiguous Queues Required: No 00:22:24.488 Arbitration Mechanisms Supported 00:22:24.488 Weighted Round Robin: Not Supported 00:22:24.488 Vendor Specific: Not Supported 00:22:24.488 Reset Timeout: 7500 ms 00:22:24.488 Doorbell Stride: 4 bytes 00:22:24.488 NVM Subsystem Reset: Not Supported 00:22:24.488 Command Sets Supported 00:22:24.488 NVM Command Set: Supported 00:22:24.488 Boot Partition: Not Supported 00:22:24.488 Memory Page Size Minimum: 4096 bytes 00:22:24.488 Memory Page Size Maximum: 4096 bytes 00:22:24.488 Persistent Memory Region: Not Supported 00:22:24.488 Optional Asynchronous Events Supported 00:22:24.488 Namespace Attribute Notices: Not Supported 00:22:24.488 Firmware Activation Notices: Not Supported 00:22:24.488 ANA Change Notices: Not Supported 00:22:24.488 PLE Aggregate Log Change Notices: Not Supported 00:22:24.488 LBA Status Info Alert Notices: Not Supported 00:22:24.488 EGE Aggregate Log Change Notices: Not Supported 00:22:24.488 Normal NVM Subsystem Shutdown event: Not Supported 00:22:24.488 Zone Descriptor Change Notices: Not Supported 00:22:24.488 Discovery Log Change Notices: Supported 00:22:24.488 Controller Attributes 00:22:24.488 128-bit Host Identifier: Not Supported 00:22:24.488 Non-Operational Permissive Mode: Not Supported 00:22:24.488 NVM Sets: Not Supported 00:22:24.488 Read Recovery Levels: 
Not Supported 00:22:24.488 Endurance Groups: Not Supported 00:22:24.488 Predictable Latency Mode: Not Supported 00:22:24.488 Traffic Based Keep ALive: Not Supported 00:22:24.488 Namespace Granularity: Not Supported 00:22:24.488 SQ Associations: Not Supported 00:22:24.488 UUID List: Not Supported 00:22:24.488 Multi-Domain Subsystem: Not Supported 00:22:24.488 Fixed Capacity Management: Not Supported 00:22:24.488 Variable Capacity Management: Not Supported 00:22:24.488 Delete Endurance Group: Not Supported 00:22:24.488 Delete NVM Set: Not Supported 00:22:24.488 Extended LBA Formats Supported: Not Supported 00:22:24.488 Flexible Data Placement Supported: Not Supported 00:22:24.488 00:22:24.488 Controller Memory Buffer Support 00:22:24.488 ================================ 00:22:24.488 Supported: No 00:22:24.488 00:22:24.488 Persistent Memory Region Support 00:22:24.488 ================================ 00:22:24.488 Supported: No 00:22:24.488 00:22:24.488 Admin Command Set Attributes 00:22:24.488 ============================ 00:22:24.488 Security Send/Receive: Not Supported 00:22:24.488 Format NVM: Not Supported 00:22:24.488 Firmware Activate/Download: Not Supported 00:22:24.488 Namespace Management: Not Supported 00:22:24.488 Device Self-Test: Not Supported 00:22:24.488 Directives: Not Supported 00:22:24.488 NVMe-MI: Not Supported 00:22:24.488 Virtualization Management: Not Supported 00:22:24.488 Doorbell Buffer Config: Not Supported 00:22:24.488 Get LBA Status Capability: Not Supported 00:22:24.488 Command & Feature Lockdown Capability: Not Supported 00:22:24.488 Abort Command Limit: 1 00:22:24.488 Async Event Request Limit: 1 00:22:24.488 Number of Firmware Slots: N/A 00:22:24.488 Firmware Slot 1 Read-Only: N/A 00:22:24.488 Firmware Activation Without Reset: N/A 00:22:24.488 Multiple Update Detection Support: N/A 00:22:24.488 Firmware Update Granularity: No Information Provided 00:22:24.488 Per-Namespace SMART Log: No 00:22:24.488 Asymmetric Namespace Access Log Page: Not Supported 00:22:24.488 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:24.488 Command Effects Log Page: Not Supported 00:22:24.488 Get Log Page Extended Data: Supported 00:22:24.488 Telemetry Log Pages: Not Supported 00:22:24.488 Persistent Event Log Pages: Not Supported 00:22:24.488 Supported Log Pages Log Page: May Support 00:22:24.488 Commands Supported & Effects Log Page: Not Supported 00:22:24.488 Feature Identifiers & Effects Log Page:May Support 00:22:24.488 NVMe-MI Commands & Effects Log Page: May Support 00:22:24.488 Data Area 4 for Telemetry Log: Not Supported 00:22:24.488 Error Log Page Entries Supported: 1 00:22:24.488 Keep Alive: Not Supported 00:22:24.488 00:22:24.488 NVM Command Set Attributes 00:22:24.488 ========================== 00:22:24.488 Submission Queue Entry Size 00:22:24.488 Max: 1 00:22:24.488 Min: 1 00:22:24.488 Completion Queue Entry Size 00:22:24.488 Max: 1 00:22:24.488 Min: 1 00:22:24.488 Number of Namespaces: 0 00:22:24.488 Compare Command: Not Supported 00:22:24.488 Write Uncorrectable Command: Not Supported 00:22:24.488 Dataset Management Command: Not Supported 00:22:24.488 Write Zeroes Command: Not Supported 00:22:24.488 Set Features Save Field: Not Supported 00:22:24.488 Reservations: Not Supported 00:22:24.488 Timestamp: Not Supported 00:22:24.488 Copy: Not Supported 00:22:24.488 Volatile Write Cache: Not Present 00:22:24.488 Atomic Write Unit (Normal): 1 00:22:24.488 Atomic Write Unit (PFail): 1 00:22:24.488 Atomic Compare & Write Unit: 1 00:22:24.489 Fused Compare & Write: Not 
Supported 00:22:24.489 Scatter-Gather List 00:22:24.489 SGL Command Set: Supported 00:22:24.489 SGL Keyed: Supported 00:22:24.489 SGL Bit Bucket Descriptor: Not Supported 00:22:24.489 SGL Metadata Pointer: Not Supported 00:22:24.489 Oversized SGL: Not Supported 00:22:24.489 SGL Metadata Address: Not Supported 00:22:24.489 SGL Offset: Supported 00:22:24.489 Transport SGL Data Block: Not Supported 00:22:24.489 Replay Protected Memory Block: Not Supported 00:22:24.489 00:22:24.489 Firmware Slot Information 00:22:24.489 ========================= 00:22:24.489 Active slot: 0 00:22:24.489 00:22:24.489 00:22:24.489 Error Log 00:22:24.489 ========= 00:22:24.489 00:22:24.489 Active Namespaces 00:22:24.489 ================= 00:22:24.489 Discovery Log Page 00:22:24.489 ================== 00:22:24.489 Generation Counter: 2 00:22:24.489 Number of Records: 2 00:22:24.489 Record Format: 0 00:22:24.489 00:22:24.489 Discovery Log Entry 0 00:22:24.489 ---------------------- 00:22:24.489 Transport Type: 1 (RDMA) 00:22:24.489 Address Family: 1 (IPv4) 00:22:24.489 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:24.489 Entry Flags: 00:22:24.489 Duplicate Returned Information: 0 00:22:24.489 Explicit Persistent Connection Support for Discovery: 0 00:22:24.489 Transport Requirements: 00:22:24.489 Secure Channel: Not Specified 00:22:24.489 Port ID: 1 (0x0001) 00:22:24.489 Controller ID: 65535 (0xffff) 00:22:24.489 Admin Max SQ Size: 32 00:22:24.489 Transport Service Identifier: 4420 00:22:24.489 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:24.489 Transport Address: 192.168.100.8 00:22:24.489 Transport Specific Address Subtype - RDMA 00:22:24.489 RDMA QP Service Type: 1 (Reliable Connected) 00:22:24.489 RDMA Provider Type: 1 (No provider specified) 00:22:24.489 RDMA CM Service: 1 (RDMA_CM) 00:22:24.489 Discovery Log Entry 1 00:22:24.489 ---------------------- 00:22:24.489 Transport Type: 1 (RDMA) 00:22:24.489 Address Family: 1 (IPv4) 00:22:24.489 Subsystem Type: 2 (NVM Subsystem) 00:22:24.489 Entry Flags: 00:22:24.489 Duplicate Returned Information: 0 00:22:24.489 Explicit Persistent Connection Support for Discovery: 0 00:22:24.489 Transport Requirements: 00:22:24.489 Secure Channel: Not Specified 00:22:24.489 Port ID: 1 (0x0001) 00:22:24.489 Controller ID: 65535 (0xffff) 00:22:24.489 Admin Max SQ Size: 32 00:22:24.489 Transport Service Identifier: 4420 00:22:24.489 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:24.489 Transport Address: 192.168.100.8 00:22:24.489 Transport Specific Address Subtype - RDMA 00:22:24.489 RDMA QP Service Type: 1 (Reliable Connected) 00:22:24.489 RDMA Provider Type: 1 (No provider specified) 00:22:24.489 RDMA CM Service: 1 (RDMA_CM) 00:22:24.489 18:07:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:24.750 get_feature(0x01) failed 00:22:24.750 get_feature(0x02) failed 00:22:24.750 get_feature(0x04) failed 00:22:24.750 ===================================================== 00:22:24.750 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:22:24.750 ===================================================== 00:22:24.750 Controller Capabilities/Features 00:22:24.750 ================================ 00:22:24.750 Vendor ID: 0000 00:22:24.750 Subsystem Vendor ID: 0000 00:22:24.750 Serial Number: 
54520647943d18deec6f 00:22:24.750 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:24.750 Firmware Version: 6.8.9-20 00:22:24.750 Recommended Arb Burst: 6 00:22:24.750 IEEE OUI Identifier: 00 00 00 00:22:24.751 Multi-path I/O 00:22:24.751 May have multiple subsystem ports: Yes 00:22:24.751 May have multiple controllers: Yes 00:22:24.751 Associated with SR-IOV VF: No 00:22:24.751 Max Data Transfer Size: 1048576 00:22:24.751 Max Number of Namespaces: 1024 00:22:24.751 Max Number of I/O Queues: 128 00:22:24.751 NVMe Specification Version (VS): 1.3 00:22:24.751 NVMe Specification Version (Identify): 1.3 00:22:24.751 Maximum Queue Entries: 128 00:22:24.751 Contiguous Queues Required: No 00:22:24.751 Arbitration Mechanisms Supported 00:22:24.751 Weighted Round Robin: Not Supported 00:22:24.751 Vendor Specific: Not Supported 00:22:24.751 Reset Timeout: 7500 ms 00:22:24.751 Doorbell Stride: 4 bytes 00:22:24.751 NVM Subsystem Reset: Not Supported 00:22:24.751 Command Sets Supported 00:22:24.751 NVM Command Set: Supported 00:22:24.751 Boot Partition: Not Supported 00:22:24.751 Memory Page Size Minimum: 4096 bytes 00:22:24.751 Memory Page Size Maximum: 4096 bytes 00:22:24.751 Persistent Memory Region: Not Supported 00:22:24.751 Optional Asynchronous Events Supported 00:22:24.751 Namespace Attribute Notices: Supported 00:22:24.751 Firmware Activation Notices: Not Supported 00:22:24.751 ANA Change Notices: Supported 00:22:24.751 PLE Aggregate Log Change Notices: Not Supported 00:22:24.751 LBA Status Info Alert Notices: Not Supported 00:22:24.751 EGE Aggregate Log Change Notices: Not Supported 00:22:24.751 Normal NVM Subsystem Shutdown event: Not Supported 00:22:24.751 Zone Descriptor Change Notices: Not Supported 00:22:24.751 Discovery Log Change Notices: Not Supported 00:22:24.751 Controller Attributes 00:22:24.751 128-bit Host Identifier: Supported 00:22:24.751 Non-Operational Permissive Mode: Not Supported 00:22:24.751 NVM Sets: Not Supported 00:22:24.751 Read Recovery Levels: Not Supported 00:22:24.751 Endurance Groups: Not Supported 00:22:24.751 Predictable Latency Mode: Not Supported 00:22:24.751 Traffic Based Keep ALive: Supported 00:22:24.751 Namespace Granularity: Not Supported 00:22:24.751 SQ Associations: Not Supported 00:22:24.751 UUID List: Not Supported 00:22:24.751 Multi-Domain Subsystem: Not Supported 00:22:24.751 Fixed Capacity Management: Not Supported 00:22:24.751 Variable Capacity Management: Not Supported 00:22:24.751 Delete Endurance Group: Not Supported 00:22:24.751 Delete NVM Set: Not Supported 00:22:24.751 Extended LBA Formats Supported: Not Supported 00:22:24.751 Flexible Data Placement Supported: Not Supported 00:22:24.751 00:22:24.751 Controller Memory Buffer Support 00:22:24.751 ================================ 00:22:24.751 Supported: No 00:22:24.751 00:22:24.751 Persistent Memory Region Support 00:22:24.751 ================================ 00:22:24.751 Supported: No 00:22:24.751 00:22:24.751 Admin Command Set Attributes 00:22:24.751 ============================ 00:22:24.751 Security Send/Receive: Not Supported 00:22:24.751 Format NVM: Not Supported 00:22:24.751 Firmware Activate/Download: Not Supported 00:22:24.751 Namespace Management: Not Supported 00:22:24.751 Device Self-Test: Not Supported 00:22:24.751 Directives: Not Supported 00:22:24.751 NVMe-MI: Not Supported 00:22:24.751 Virtualization Management: Not Supported 00:22:24.751 Doorbell Buffer Config: Not Supported 00:22:24.751 Get LBA Status Capability: Not Supported 00:22:24.751 Command & Feature Lockdown 
Capability: Not Supported 00:22:24.751 Abort Command Limit: 4 00:22:24.751 Async Event Request Limit: 4 00:22:24.751 Number of Firmware Slots: N/A 00:22:24.751 Firmware Slot 1 Read-Only: N/A 00:22:24.751 Firmware Activation Without Reset: N/A 00:22:24.751 Multiple Update Detection Support: N/A 00:22:24.751 Firmware Update Granularity: No Information Provided 00:22:24.751 Per-Namespace SMART Log: Yes 00:22:24.751 Asymmetric Namespace Access Log Page: Supported 00:22:24.751 ANA Transition Time : 10 sec 00:22:24.751 00:22:24.751 Asymmetric Namespace Access Capabilities 00:22:24.751 ANA Optimized State : Supported 00:22:24.751 ANA Non-Optimized State : Supported 00:22:24.751 ANA Inaccessible State : Supported 00:22:24.751 ANA Persistent Loss State : Supported 00:22:24.751 ANA Change State : Supported 00:22:24.751 ANAGRPID is not changed : No 00:22:24.751 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:24.751 00:22:24.751 ANA Group Identifier Maximum : 128 00:22:24.751 Number of ANA Group Identifiers : 128 00:22:24.751 Max Number of Allowed Namespaces : 1024 00:22:24.751 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:24.751 Command Effects Log Page: Supported 00:22:24.751 Get Log Page Extended Data: Supported 00:22:24.751 Telemetry Log Pages: Not Supported 00:22:24.751 Persistent Event Log Pages: Not Supported 00:22:24.751 Supported Log Pages Log Page: May Support 00:22:24.751 Commands Supported & Effects Log Page: Not Supported 00:22:24.751 Feature Identifiers & Effects Log Page:May Support 00:22:24.751 NVMe-MI Commands & Effects Log Page: May Support 00:22:24.751 Data Area 4 for Telemetry Log: Not Supported 00:22:24.751 Error Log Page Entries Supported: 128 00:22:24.751 Keep Alive: Supported 00:22:24.751 Keep Alive Granularity: 1000 ms 00:22:24.751 00:22:24.751 NVM Command Set Attributes 00:22:24.751 ========================== 00:22:24.751 Submission Queue Entry Size 00:22:24.751 Max: 64 00:22:24.751 Min: 64 00:22:24.751 Completion Queue Entry Size 00:22:24.751 Max: 16 00:22:24.751 Min: 16 00:22:24.751 Number of Namespaces: 1024 00:22:24.751 Compare Command: Not Supported 00:22:24.751 Write Uncorrectable Command: Not Supported 00:22:24.751 Dataset Management Command: Supported 00:22:24.751 Write Zeroes Command: Supported 00:22:24.751 Set Features Save Field: Not Supported 00:22:24.751 Reservations: Not Supported 00:22:24.751 Timestamp: Not Supported 00:22:24.751 Copy: Not Supported 00:22:24.751 Volatile Write Cache: Present 00:22:24.751 Atomic Write Unit (Normal): 1 00:22:24.751 Atomic Write Unit (PFail): 1 00:22:24.751 Atomic Compare & Write Unit: 1 00:22:24.751 Fused Compare & Write: Not Supported 00:22:24.751 Scatter-Gather List 00:22:24.751 SGL Command Set: Supported 00:22:24.751 SGL Keyed: Supported 00:22:24.751 SGL Bit Bucket Descriptor: Not Supported 00:22:24.751 SGL Metadata Pointer: Not Supported 00:22:24.751 Oversized SGL: Not Supported 00:22:24.751 SGL Metadata Address: Not Supported 00:22:24.751 SGL Offset: Supported 00:22:24.751 Transport SGL Data Block: Not Supported 00:22:24.751 Replay Protected Memory Block: Not Supported 00:22:24.751 00:22:24.751 Firmware Slot Information 00:22:24.751 ========================= 00:22:24.751 Active slot: 0 00:22:24.751 00:22:24.751 Asymmetric Namespace Access 00:22:24.751 =========================== 00:22:24.751 Change Count : 0 00:22:24.751 Number of ANA Group Descriptors : 1 00:22:24.751 ANA Group Descriptor : 0 00:22:24.751 ANA Group ID : 1 00:22:24.751 Number of NSID Values : 1 00:22:24.751 Change Count : 0 00:22:24.751 ANA State 
: 1 00:22:24.751 Namespace Identifier : 1 00:22:24.751 00:22:24.751 Commands Supported and Effects 00:22:24.751 ============================== 00:22:24.751 Admin Commands 00:22:24.751 -------------- 00:22:24.751 Get Log Page (02h): Supported 00:22:24.751 Identify (06h): Supported 00:22:24.751 Abort (08h): Supported 00:22:24.751 Set Features (09h): Supported 00:22:24.751 Get Features (0Ah): Supported 00:22:24.751 Asynchronous Event Request (0Ch): Supported 00:22:24.751 Keep Alive (18h): Supported 00:22:24.751 I/O Commands 00:22:24.751 ------------ 00:22:24.751 Flush (00h): Supported 00:22:24.751 Write (01h): Supported LBA-Change 00:22:24.751 Read (02h): Supported 00:22:24.751 Write Zeroes (08h): Supported LBA-Change 00:22:24.751 Dataset Management (09h): Supported 00:22:24.751 00:22:24.751 Error Log 00:22:24.751 ========= 00:22:24.751 Entry: 0 00:22:24.751 Error Count: 0x3 00:22:24.751 Submission Queue Id: 0x0 00:22:24.751 Command Id: 0x5 00:22:24.751 Phase Bit: 0 00:22:24.751 Status Code: 0x2 00:22:24.751 Status Code Type: 0x0 00:22:24.751 Do Not Retry: 1 00:22:24.751 Error Location: 0x28 00:22:24.751 LBA: 0x0 00:22:24.751 Namespace: 0x0 00:22:24.751 Vendor Log Page: 0x0 00:22:24.751 ----------- 00:22:24.751 Entry: 1 00:22:24.751 Error Count: 0x2 00:22:24.751 Submission Queue Id: 0x0 00:22:24.751 Command Id: 0x5 00:22:24.751 Phase Bit: 0 00:22:24.751 Status Code: 0x2 00:22:24.751 Status Code Type: 0x0 00:22:24.751 Do Not Retry: 1 00:22:24.751 Error Location: 0x28 00:22:24.751 LBA: 0x0 00:22:24.751 Namespace: 0x0 00:22:24.751 Vendor Log Page: 0x0 00:22:24.751 ----------- 00:22:24.751 Entry: 2 00:22:24.751 Error Count: 0x1 00:22:24.751 Submission Queue Id: 0x0 00:22:24.751 Command Id: 0x0 00:22:24.751 Phase Bit: 0 00:22:24.751 Status Code: 0x2 00:22:24.751 Status Code Type: 0x0 00:22:24.751 Do Not Retry: 1 00:22:24.751 Error Location: 0x28 00:22:24.751 LBA: 0x0 00:22:24.751 Namespace: 0x0 00:22:24.751 Vendor Log Page: 0x0 00:22:24.751 00:22:24.751 Number of Queues 00:22:24.751 ================ 00:22:24.751 Number of I/O Submission Queues: 128 00:22:24.751 Number of I/O Completion Queues: 128 00:22:24.751 00:22:24.751 ZNS Specific Controller Data 00:22:24.751 ============================ 00:22:24.751 Zone Append Size Limit: 0 00:22:24.751 00:22:24.751 00:22:24.751 Active Namespaces 00:22:24.751 ================= 00:22:24.751 get_feature(0x05) failed 00:22:24.751 Namespace ID:1 00:22:24.751 Command Set Identifier: NVM (00h) 00:22:24.751 Deallocate: Supported 00:22:24.751 Deallocated/Unwritten Error: Not Supported 00:22:24.751 Deallocated Read Value: Unknown 00:22:24.751 Deallocate in Write Zeroes: Not Supported 00:22:24.751 Deallocated Guard Field: 0xFFFF 00:22:24.751 Flush: Supported 00:22:24.751 Reservation: Not Supported 00:22:24.751 Namespace Sharing Capabilities: Multiple Controllers 00:22:24.751 Size (in LBAs): 3750748848 (1788GiB) 00:22:24.751 Capacity (in LBAs): 3750748848 (1788GiB) 00:22:24.751 Utilization (in LBAs): 3750748848 (1788GiB) 00:22:24.751 UUID: b54a722e-4b4b-4db1-876f-5cb9650a1c41 00:22:24.751 Thin Provisioning: Not Supported 00:22:24.751 Per-NS Atomic Units: Yes 00:22:24.751 Atomic Write Unit (Normal): 8 00:22:24.751 Atomic Write Unit (PFail): 8 00:22:24.751 Preferred Write Granularity: 8 00:22:24.751 Atomic Compare & Write Unit: 8 00:22:24.751 Atomic Boundary Size (Normal): 0 00:22:24.751 Atomic Boundary Size (PFail): 0 00:22:24.751 Atomic Boundary Offset: 0 00:22:24.751 NGUID/EUI64 Never Reused: No 00:22:24.751 ANA group ID: 1 00:22:24.751 Namespace Write Protected: No 
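Both identify dumps in this test come from spdk_nvme_identify pointed at the fabric with a transport-ID string; the second run, reproduced standalone from the trace:

    # Transport ID copied verbatim from the trace; run from the SPDK build tree.
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The three error-log entries above also appear to line up with the get_feature(0x01), (0x02) and (0x04) failures printed at the head of the dump: status code type 0x0 with status code 0x2 decodes as the generic Invalid Field in Command, and Error Location 0x28 is byte 40 of the submission-queue entry, i.e. CDW10, where the feature identifier sits; the kernel target is simply rejecting feature IDs it does not implement.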
00:22:24.751 Number of LBA Formats: 1 00:22:24.751 Current LBA Format: LBA Format #00 00:22:24.751 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:24.751 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:24.751 rmmod nvme_rdma 00:22:24.751 rmmod nvme_fabrics 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:22:24.751 18:07:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:28.959 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:80:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:22:28.959 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:22:29.220 00:22:29.220 real 0m18.790s 00:22:29.220 user 0m5.681s 00:22:29.220 sys 0m12.105s 00:22:29.220 18:07:47 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.220 18:07:47 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.220 ************************************ 00:22:29.220 END TEST nvmf_identify_kernel_target 00:22:29.220 ************************************ 00:22:29.220 18:07:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:29.220 18:07:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.220 18:07:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.220 18:07:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.483 ************************************ 00:22:29.483 START TEST nvmf_auth_host 00:22:29.483 ************************************ 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:29.483 * Looking for test storage... 
00:22:29.483 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:22:29.483 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.484 --rc genhtml_branch_coverage=1 00:22:29.484 --rc genhtml_function_coverage=1 00:22:29.484 --rc genhtml_legend=1 00:22:29.484 --rc geninfo_all_blocks=1 00:22:29.484 --rc geninfo_unexecuted_blocks=1 00:22:29.484 00:22:29.484 ' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.484 --rc genhtml_branch_coverage=1 00:22:29.484 --rc genhtml_function_coverage=1 00:22:29.484 --rc genhtml_legend=1 00:22:29.484 --rc geninfo_all_blocks=1 00:22:29.484 --rc geninfo_unexecuted_blocks=1 00:22:29.484 00:22:29.484 ' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.484 --rc genhtml_branch_coverage=1 00:22:29.484 --rc genhtml_function_coverage=1 00:22:29.484 --rc genhtml_legend=1 00:22:29.484 --rc geninfo_all_blocks=1 00:22:29.484 --rc geninfo_unexecuted_blocks=1 00:22:29.484 00:22:29.484 ' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.484 --rc genhtml_branch_coverage=1 00:22:29.484 --rc genhtml_function_coverage=1 00:22:29.484 --rc genhtml_legend=1 00:22:29.484 --rc geninfo_all_blocks=1 00:22:29.484 --rc geninfo_unexecuted_blocks=1 00:22:29.484 00:22:29.484 ' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.484 18:07:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:29.484 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:29.484 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:29.485 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:29.485 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.485 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.485 18:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.485 18:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:29.485 18:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:29.485 18:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:29.485 18:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:39.485 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:39.485 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:39.485 18:07:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:39.485 Found net devices under 0000:98:00.0: mlx_0_0 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:39.485 Found net devices under 0000:98:00.1: mlx_0_1 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:22:39.485 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:39.486 18:07:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
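get_ip_address above resolves an RDMA netdev's IPv4 address by parsing the one-line-per-address `ip -o -4` output: the fourth whitespace-delimited field is the address in CIDR form, and cut strips the prefix length. The same pipeline stands alone; a sketch using the interface name the trace just discovered:

iface=mlx_0_0                                                 # from get_rdma_if_list
ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1  # prints 192.168.100.8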
00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:39.486 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:39.486 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:39.486 altname enp152s0f0np0 00:22:39.486 altname ens817f0np0 00:22:39.486 inet 192.168.100.8/24 scope global mlx_0_0 00:22:39.486 valid_lft forever preferred_lft forever 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:39.486 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:39.486 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:39.486 altname enp152s0f1np1 00:22:39.486 altname ens817f1np1 00:22:39.486 inet 192.168.100.9/24 scope global mlx_0_1 00:22:39.486 valid_lft forever preferred_lft forever 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:39.486 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:39.487 192.168.100.9' 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:39.487 192.168.100.9' 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:39.487 192.168.100.9' 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
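get_available_rdma_ips then gathers one address per RDMA interface into RDMA_IP_LIST, newline-separated, and the first and second target IPs are peeled off with head/tail exactly as the xtrace shows. A standalone sketch of that split, with the values from this run:

RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9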
00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3657672 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3657672 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3657672 ']' 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.487 18:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=02837ae58fc66972e0d3959286cf6b49 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FdN 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 02837ae58fc66972e0d3959286cf6b49 0 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 02837ae58fc66972e0d3959286cf6b49 0 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=02837ae58fc66972e0d3959286cf6b49 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FdN 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FdN 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.FdN 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0f901d1818f9f55c41e94a7c4f2f5abf55fa54c8478b2253353d609862b289e8 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CeJ 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0f901d1818f9f55c41e94a7c4f2f5abf55fa54c8478b2253353d609862b289e8 3 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0f901d1818f9f55c41e94a7c4f2f5abf55fa54c8478b2253353d609862b289e8 3 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0f901d1818f9f55c41e94a7c4f2f5abf55fa54c8478b2253353d609862b289e8 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CeJ 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CeJ 00:22:39.487 18:07:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.CeJ 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4a117b026489642fb95e56a7a863f8fadc8407de38ef6bd2 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.M55 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4a117b026489642fb95e56a7a863f8fadc8407de38ef6bd2 0 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4a117b026489642fb95e56a7a863f8fadc8407de38ef6bd2 0 00:22:39.487 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4a117b026489642fb95e56a7a863f8fadc8407de38ef6bd2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.M55 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.M55 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.M55 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8e950217cb9ea058456cc0791734597b0c7e1296d51dd272 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aEk 00:22:39.488 
18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8e950217cb9ea058456cc0791734597b0c7e1296d51dd272 2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8e950217cb9ea058456cc0791734597b0c7e1296d51dd272 2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8e950217cb9ea058456cc0791734597b0c7e1296d51dd272 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aEk 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aEk 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.aEk 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6a66434fd6fc31eef5f6da29d8a8a073 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TRM 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6a66434fd6fc31eef5f6da29d8a8a073 1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6a66434fd6fc31eef5f6da29d8a8a073 1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6a66434fd6fc31eef5f6da29d8a8a073 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TRM 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TRM 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.TRM 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:39.488 18:07:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5c787616f8825e27333429e3b68c38b2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4C0 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5c787616f8825e27333429e3b68c38b2 1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5c787616f8825e27333429e3b68c38b2 1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5c787616f8825e27333429e3b68c38b2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4C0 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4C0 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4C0 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=46c602abfb85dfec924e9cbb1ced376135152d23f17285f3 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.c3i 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 46c602abfb85dfec924e9cbb1ced376135152d23f17285f3 2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
46c602abfb85dfec924e9cbb1ced376135152d23f17285f3 2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=46c602abfb85dfec924e9cbb1ced376135152d23f17285f3 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.c3i 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.c3i 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.c3i 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d7a0ac2389390c7fd2f18afe2c22e655 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.th9 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d7a0ac2389390c7fd2f18afe2c22e655 0 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d7a0ac2389390c7fd2f18afe2c22e655 0 00:22:39.488 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d7a0ac2389390c7fd2f18afe2c22e655 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.th9 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.th9 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.th9 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.489 
18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=91a10bbb08f9c20c7bc4fb5960dca9149de0fe81fbb8ce49b5a499d2e4499794 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dPa 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 91a10bbb08f9c20c7bc4fb5960dca9149de0fe81fbb8ce49b5a499d2e4499794 3 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 91a10bbb08f9c20c7bc4fb5960dca9149de0fe81fbb8ce49b5a499d2e4499794 3 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=91a10bbb08f9c20c7bc4fb5960dca9149de0fe81fbb8ce49b5a499d2e4499794 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dPa 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dPa 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.dPa 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3657672 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3657672 ']' 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
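
The stretch of trace above is the suite's gen_dhchap_key helper building the DH-HMAC-CHAP secrets used for the rest of the run: pull the requested number of hex characters out of /dev/urandom with xxd, wrap them in the NVMe DHHC-1 secret representation, park the result in a mode-0600 temp file, and then block in waitforlisten until the target's RPC socket is up. A minimal sketch of that helper, assuming the spec's secret layout of base64(secret || CRC-32 of the secret, little-endian); the python body is a reconstruction, not a copy of nvmf/common.sh:

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2        # e.g. "sha256" 32 -> 32 hex chars of key
        local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # DHHC-1:<two-hex-digit digest id>:<base64 of key bytes + crc32(key), little-endian>:
        python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "${ids[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

Decoding one of the secrets that shows up later in this log (DHHC-1:01:NWM3ODc2...) gives back exactly the hex string 5c787616f8825e27333429e3b68c38b2 generated above, which is how the /tmp files, the keyring names, and the DHHC-1 strings written into configfs all line up.
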
00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.489 18:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FdN 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.CeJ ]] 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CeJ 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.749 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.M55 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.aEk ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aEk 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.TRM 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4C0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4C0 00:22:39.750 18:07:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.c3i 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.th9 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.th9 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dPa 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:22:39.750 18:07:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:22:39.750 18:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:22:43.952 Waiting for block devices as requested
00:22:43.952 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:22:43.952 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:22:43.952 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:22:43.952 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:22:43.952 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:22:43.952 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:22:43.952 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:22:44.212 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:22:44.212 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:22:44.471 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:22:44.471 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:22:44.471 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:22:44.471 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:22:44.730 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:22:44.730 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:22:44.730 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:22:44.989 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:22:45.986 No valid GPT data, bailing
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -t rdma -s 4420
00:22:45.986
00:22:45.986 Discovery Log Number of Records 2, Generation counter 2
00:22:45.986 =====Discovery Log Entry 0======
00:22:45.986 trtype: rdma
00:22:45.986 adrfam: ipv4
00:22:45.986 subtype: current discovery subsystem
00:22:45.986 treq: not specified, sq flow control disable supported
00:22:45.986 portid: 1
00:22:45.986 trsvcid: 4420
00:22:45.986 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:22:45.986 traddr: 192.168.100.8
00:22:45.986 eflags: none
00:22:45.986 rdma_prtype: not specified
00:22:45.986 rdma_qptype: connected
00:22:45.986 rdma_cms: rdma-cm
00:22:45.986 rdma_pkey: 0x0000
00:22:45.986 =====Discovery Log Entry 1======
00:22:45.986 trtype: rdma
00:22:45.986 adrfam: ipv4
00:22:45.986 subtype: nvme subsystem
00:22:45.986 treq: not specified, sq flow control disable supported
00:22:45.986 portid: 1
00:22:45.986 trsvcid: 4420
00:22:45.986 subnqn: nqn.2024-02.io.spdk:cnode0
00:22:45.986 traddr: 192.168.100.8
00:22:45.986 eflags: none
00:22:45.986 rdma_prtype: not specified
00:22:45.986 rdma_qptype: connected
00:22:45.986 rdma_cms: rdma-cm
00:22:45.986 rdma_pkey: 0x0000
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host --
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.986 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:45.987 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.309 nvme0n1 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.309 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.597 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.597 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:46.597 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.598 18:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.598 nvme0n1 00:22:46.598 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.598 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.598 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.598 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.598 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
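
Between host/auth.sh@81 and host/auth.sh@65 the host-side contract traced above is complete: register the generated key files under stable keyring names, pin the digest/dhgroup proposal, attach with DH-HMAC-CHAP (bidirectional whenever a ckey exists), check that the nvme0 controller materialized, and detach. Condensed into bare rpc.py calls it is roughly the following; the key paths stand in for the random /tmp names of this run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key1 /tmp/spdk.key-null.XXX       # host secret
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XXX    # controller secret
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'           # expect nvme0
    $rpc bdev_nvme_detach_controller nvme0
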
00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.860 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 nvme0n1 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.121 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.382 nvme0n1 00:22:47.382 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.382 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.382 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.382 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.382 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.383 18:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.644 nvme0n1 00:22:47.644 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.644 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.644 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.645 18:08:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.645 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.645 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.906 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.906 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.906 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.906 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.906 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.906 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.907 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.167 nvme0n1 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 
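
nvmet_auth_set_key, which the trace is entering here for the ffdhe3072 round, boils down to four writes into the kernel target's configfs entry for the allowed host: the digest, the DH group, and the two DHHC-1 secrets echoed at host/auth.sh@48 through @51. A sketch of the equivalent by hand; the attribute names are assumed from the stock Linux nvmet layout (the trace shows only the values being echoed, not their destinations), and the secrets are elided:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # created at host/auth.sh@36
    echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest under test
    echo ffdhe2048 > "$host/dhchap_dhgroup"          # DH group under test
    echo 'DHHC-1:00:...:' > "$host/dhchap_key"       # host secret for the keyid under test
    echo 'DHHC-1:03:...:' > "$host/dhchap_ctrl_key"  # controller secret; skipped when ckey is empty
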
00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.167 18:08:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.167 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.428 nvme0n1 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.428 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:48.688 18:08:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.688 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.689 18:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.948 nvme0n1 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
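The DHHC-1:NN:<base64>: strings being echoed are the standard NVMe DH-HMAC-CHAP secret representation: the NN field names the hash for the retained-key transform (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and, per the Linux nvme-auth convention, the base64 payload carries the key material followed by a 4-byte CRC-32. A quick offline sanity check of the keyid=0 key from this log (pure shell, nothing SPDK-specific):

  # Split the DHHC-1 fields and measure the decoded payload.
  key='DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm:'
  b64=$(cut -d: -f3 <<< "$key")
  base64 -d <<< "$b64" | wc -c
  # prints 36: a 32-byte secret plus the 4-byte CRC-32 trailer
  # (other keys in this log decode to 52 and 68 bytes: 48- and 64-byte secrets)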
00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:48.948 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.949 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.222 nvme0n1 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.222 18:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.793 nvme0n1 00:22:49.793 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.793 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.793 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.793 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.793 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.794 18:08:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.794 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.055 nvme0n1 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:50.055 
18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.055 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.627 nvme0n1 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.627 
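Two shell idioms recur in every verification block of this log. First, the \n\v\m\e\0 on the right of [[ nvme0 == \n\v\m\e\0 ]] is just how xtrace renders a quoted operand: a quoted or escaped right-hand side of [[ == ]] compares literally, while an unquoted one is treated as a glob pattern. Second, the xtrace_disable / set +x pairs mute tracing inside rpc_cmd so the JSON-RPC plumbing does not flood the output. The pattern-versus-literal distinction in two lines:

  name=nvme0
  [[ $name == nvme* ]]      && echo "glob match"      # unquoted RHS is a pattern
  [[ $name == \n\v\m\e\0 ]] && echo "literal match"   # escaped RHS, as xtrace prints it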
18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.627 18:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.888 nvme0n1 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.888 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:51.149 
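The recurring ip_candidates block ahead of every attach is the suite's get_main_ns_ip helper choosing which environment variable holds the target address for the transport under test. Reconstructed from the traced checks (the ${!ip} indirect expansion and the TEST_TRANSPORT variable are assumptions; the trace only shows the name NVMF_FIRST_TARGET_IP being selected and 192.168.100.8 coming out):

  # Map transport -> name of the variable holding the address, then dereference.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      ip=${ip_candidates[$TEST_TRANSPORT]}    # rdma in this run
      [[ -n ${!ip} ]] || return 1             # the traced [[ -z 192.168.100.8 ]] check
      echo "${!ip}"
  }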
18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.149 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.410 nvme0n1 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.410 18:08:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:51.410 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.411 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:51.411 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:51.411 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:51.411 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.411 18:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.982 nvme0n1 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.982 
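Stepping back, the auth.sh@101/@102/@103 markers give the run its shape: a single digest (sha256 in this stretch) is marched across the DH groups ffdhe3072, ffdhe4096 and, below, ffdhe6144, exercising key ids 0 through 4 for each group. A runnable paraphrase of the driver loop, with echo stubs standing in for the real helpers (the real nvmet_auth_set_key stages target-side credentials; connect_authenticate is sketched earlier):

  nvmet_auth_set_key()   { echo "target key:  $*"; }
  connect_authenticate() { echo "host attach: $*"; }
  keys=( key0 key1 key2 key3 key4 )            # placeholders for the DHHC-1 secrets
  dhgroups=( ffdhe3072 ffdhe4096 ffdhe6144 )   # the groups seen in this log
  for dhgroup in "${dhgroups[@]}"; do          # auth.sh@101
      for keyid in "${!keys[@]}"; do           # auth.sh@102
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # auth.sh@103
          connect_authenticate sha256 "$dhgroup" "$keyid"  # auth.sh@104
      done
  done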
18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.982 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.554 nvme0n1 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:52.554 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:52.555 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.555 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.555 18:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.127 nvme0n1 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
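On the target side, the echo 'hmac(sha256)' / echo ffdhe6144 / echo DHHC-1:... triplet at auth.sh@48-50 is nvmet_auth_set_key publishing the credentials the kernel target should expect before each attach. The write destinations are not visible in the trace; assuming the kernel nvmet configfs layout, it plausibly looks like the following (paths and attribute names are an assumption, not read from this log):

  # Hypothetical reconstruction: stage digest, DH group and key(s) for a host
  # under nvmet configfs so the target can authenticate the upcoming attach.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
  echo ffdhe6144      > "$host_dir/dhchap_dhgroup"
  echo 'DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm:' > "$host_dir/dhchap_key"
  # a controller (bidirectional) key would be written only when ckey is non-empty:
  # echo "$ckey" > "$host_dir/dhchap_ctrl_key"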
00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.127 18:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.700 nvme0n1 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.700 18:08:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.700 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.272 nvme0n1 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.272 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:54.533 18:08:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.533 18:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.105 nvme0n1 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:55.105 18:08:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.105 18:08:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.678 nvme0n1 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 
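At this point the sha256/ffdhe6144 sweep has covered keyids 1-4 and the trace moves on to ffdhe8192 with keyid 0, whose peer secret (ckey0) is the DHHC-1:03: value echoed just above. The driving structure, reconstructed from the for-loop markers at host/auth.sh@100-102 and the calls at @103-104, is a sketch under the assumption that the loops nest exactly as the trace markers suggest:

    # Sweep structure reconstructed from the host/auth.sh@100-104 markers.
    # Values observed in this excerpt: digests sha256 and sha384; dhgroups
    # ffdhe2048, ffdhe6144 and ffdhe8192; keyids 0 through 4. The script's
    # full arrays may contain more entries than this excerpt shows.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done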
00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.678 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.621 nvme0n1 00:22:56.621 18:08:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.621 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.564 nvme0n1 00:22:57.564 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.564 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.564 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.564 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.564 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.564 18:08:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:57.564 18:08:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.564 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.503 nvme0n1 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.503 
18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:58.503 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.504 18:08:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.504 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.445 nvme0n1 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:59.445 18:08:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.445 18:08:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.384 nvme0n1 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
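Note that keyid 4 carries no peer secret (host/auth.sh@46 sets ckey to the empty string), so its attach at @61 passes --dhchap-key key4 alone and the handshake stays unidirectional; the :+ expansion at host/auth.sh@58 is what makes the controller key optional. A standalone illustration of that idiom (the expansion itself is copied from the trace; the demo values are invented for illustration only):

    # host/auth.sh@58: expand to the extra option pair only when a
    # controller key exists for this keyid, i.e. bidirectional DH-CHAP
    # is opt-in per key. Demo values are illustrative, not from the log.
    declare -a ckeys=([0]="DHHC-1:03:placeholder" [4]="")
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[@]:-<none>}"
    done
    # keyid=0 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 extra args: <none>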
00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.384 18:08:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.645 nvme0n1 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:00.645 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.925 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.925 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.925 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:00.925 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.925 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:00.925 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.925 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.926 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.187 nvme0n1 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:01.187 18:08:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.187 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.449 nvme0n1 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:01.449 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.450 18:08:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.450 18:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.712 nvme0n1 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:01.712 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:01.972 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.973 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:02.234 nvme0n1 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:02.234 
18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:02.234 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.235 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.495 nvme0n1 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.495 18:08:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.756 nvme0n1 00:23:02.756 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.756 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.756 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.756 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.757 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.017 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.278 nvme0n1 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.278 18:08:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.278 18:08:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.540 nvme0n1 00:23:03.540 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.540 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.540 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.540 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.540 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.540 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:03.801 18:08:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.801 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.802 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.063 nvme0n1 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:04.063 18:08:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.063 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.635 nvme0n1 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.635 18:08:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:04.635 18:08:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.635 18:08:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.898 nvme0n1 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.898 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.159 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.420 nvme0n1 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.420 18:08:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.420 18:08:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.992 nvme0n1 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:05.992 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.993 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.254 nvme0n1 00:23:06.254 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.254 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.254 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.254 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.254 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.254 18:08:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.515 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.515 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.515 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.515 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.515 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.515 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.515 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.516 18:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.088 nvme0n1 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
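
Note on the nvmet_auth_set_key traces above (host/auth.sh@42-51): they program the kernel nvmet target side of the handshake for one (digest, dhgroup, keyid) combination. The DHHC-1:<nn>:<base64>: strings echoed at @50/@51 follow the NVMe-oF secret representation (the base64 payload carries the secret plus a CRC-32; the <nn> field tags which transform the secret is tied to, 00 meaning none), and the @48/@49 echoes select the HMAC and FFDHE group. A minimal sketch of what those echoes appear to feed, assuming the upstream nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key; the trace does not show the destination paths) and the keys/ckeys arrays the harness iterates; the _sketch name is not in the harness:

# Assumed layout: one configfs host entry per initiator hostnqn.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})"     > "${hostdir}/dhchap_hash"     # auth.sh@48
    echo "${dhgroup}"          > "${hostdir}/dhchap_dhgroup"  # auth.sh@49
    echo "${keys[keyid]}"      > "${hostdir}/dhchap_key"      # auth.sh@50
    # A controller (bidirectional) key is optional; auth.sh@51 only writes
    # one when a ckey was generated for this key id.
    if [[ -n ${ckeys[keyid]} ]]; then
        echo "${ckeys[keyid]}" > "${hostdir}/dhchap_ctrl_key"
    fi
}
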
00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.088 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.089 18:08:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.660 nvme0n1 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:07.660 18:08:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.660 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.231 nvme0n1 00:23:08.231 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.231 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.231 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.231 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.231 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.231 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
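
Note on the get_main_ns_ip traces above (nvmf/common.sh@769-783): each connect_authenticate pass resolves the target address by mapping the transport under test to the name of the environment variable that holds the address, then reading it back through bash indirect expansion, which yields 192.168.100.8 for this rdma run. A condensed sketch of that helper as it reads from the trace; the _sketch suffix is not in the harness and the error path is abbreviated:

get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # this run: 192.168.100.8
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # Pick the variable *name* for this transport, then dereference it
    # with ${!ip} (bash indirect expansion).
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n $ip && -n ${!ip} ]] || return 1
    echo "${!ip}"
}
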
00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.492 18:08:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.061 nvme0n1 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.061 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.062 18:08:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.631 nvme0n1 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:09.631 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.632 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.572 nvme0n1 00:23:10.572 18:08:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:10.572 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.573 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.535 nvme0n1 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.535 18:08:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.535 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.476 nvme0n1 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:12.476 
18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.476 18:08:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.416 nvme0n1 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.416 18:08:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.355 nvme0n1 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.355 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:14.356 18:08:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:14.356 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.616 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.616 18:08:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.616 nvme0n1 00:23:14.616 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.616 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.616 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.616 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.616 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.616 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.876 18:08:33 
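
The DHHC-1 strings being echoed in this trace are DH-HMAC-CHAP secrets in NVMe's textual key format: a "DHHC-1:" tag, a two-digit transformation indicator (00 for an untransformed secret; 01/02/03 conventionally mark 32/48/64-byte secrets derived via HMAC-SHA-256/384/512), a base64 field carrying the secret plus a 4-byte CRC-32, and a closing ":". A quick sanity check of that layout, assuming GNU coreutils and using key0 from this trace:

    # Inspect a DH-CHAP secret's decoded payload (secret + 4-byte CRC-32).
    key='DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm:'
    b64=${key#DHHC-1:*:}                    # strip the "DHHC-1:NN:" prefix
    b64=${b64%:}                            # strip the trailing ':'
    printf '%s' "$b64" | base64 -d | wc -c  # 32-byte secret + CRC -> prints 36
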
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.876 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.137 nvme0n1 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.137 18:08:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.137 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.398 nvme0n1 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.398 18:08:33 
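
Only the interior echoes of nvmet_auth_set_key are visible here because bash xtrace does not print redirections; the digest ('hmac(sha512)'), the FFDHE group, and the DHHC-1 secrets are being written into the kernel nvmet target's configfs entry for the host. A minimal sketch of such a helper, assuming the usual nvmet configfs layout (the paths, attribute names, and hostnqn below are assumptions, not taken from this log; keys and ckeys are the test's key arrays):

    # Hypothetical sketch: key one host for DH-CHAP on a kernel nvmet target.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed
        echo "hmac(${digest})" > "${host}/dhchap_hash"
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"
        echo "${keys[keyid]}"  > "${host}/dhchap_key"
        # Controller key is optional; set it only when a ckey exists (bidirectional auth).
        [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
    }
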
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:15.398 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.399 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.399 18:08:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.659 nvme0n1 00:23:15.659 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.659 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.660 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.660 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.660 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.660 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.920 18:08:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.920 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.180 nvme0n1 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
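
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment seen above is a bash idiom for optional flags: ${var:+word} expands to word only when var is set and non-empty, so the array holds either the complete flag pair or nothing. That is why keyid 4, whose controller key is empty (ckey= in the trace), is attached without --dhchap-ctrlr-key, i.e. with unidirectional authentication. A standalone demonstration:

    # ${var:+word} builds an optional argument list: empty var -> no flags at all.
    ckeys=([0]="secret0" [4]="")
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid argc=${#ckey[@]} args='${ckey[*]}'"
    done
    # keyid=0 argc=2 args='--dhchap-ctrlr-key ckey0'
    # keyid=4 argc=0 args=''
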
00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:16.180 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.181 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.441 nvme0n1 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.441 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.700 18:08:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.959 nvme0n1 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.959 18:08:35 
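
Each connect_authenticate pass on the SPDK host side reduces to the same four RPCs seen throughout this trace: constrain the allowed DH-CHAP digest and DH group, attach with the per-keyid key names, confirm the controller came up, and detach. Condensed, using SPDK's rpc.py directly (rpc_cmd in the trace is the test suite's wrapper around it; key1 and ckey1 are keyring entries registered earlier in the run, outside this excerpt):

    # One authenticated attach/verify/detach cycle, as driven in the trace.
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # expect nvme0
    rpc.py bdev_nvme_detach_controller nvme0
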
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.959 18:08:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.959 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.219 nvme0n1 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 
00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.219 18:08:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.219 18:08:35 
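
get_main_ns_ip (nvmf/common.sh@769-783 above) picks the address handed to bdev_nvme_attach_controller by mapping the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and then dereferencing that name, which is how ip=NVMF_FIRST_TARGET_IP becomes 192.168.100.8 by the final echo. A reduced sketch using bash indirect expansion (the selector variable TEST_TRANSPORT is an assumption; the script's exact body is not shown in this trace):

    # Minimal sketch of the transport -> IP lookup via ${!name} indirection.
    NVMF_FIRST_TARGET_IP=192.168.100.8
    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    get_main_ns_ip() {
        local ip=${ip_candidates[$TEST_TRANSPORT]}  # holds a variable *name*
        echo "${!ip}"                               # indirect expansion -> its value
    }
    TEST_TRANSPORT=rdma
    get_main_ns_ip   # -> 192.168.100.8
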
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.479 nvme0n1 00:23:17.479 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.479 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.738 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.997 nvme0n1 00:23:17.997 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.998 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.567 nvme0n1 00:23:18.567 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.568 18:08:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.828 nvme0n1 00:23:18.828 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.828 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.828 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.828 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.828 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.828 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.088 18:08:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.088 18:08:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.088 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 nvme0n1 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.348 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:19.608 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.609 18:08:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.869 nvme0n1 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.869 
18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.869 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.440 nvme0n1 00:23:20.440 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.440 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:20.441 18:08:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.441 18:08:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.011 nvme0n1 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.011 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:21.012 18:08:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.012 18:08:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.583 nvme0n1 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.583 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:21.843 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.844 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.844 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 nvme0n1 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.413 18:08:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.983 nvme0n1 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.983 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.984 18:08:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.553 nvme0n1 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.553 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 
00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI4MzdhZTU4ZmM2Njk3MmUwZDM5NTkyODZjZjZiNDkDnDhm: 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: ]] 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MDFkMTgxOGY5ZjU1YzQxZTk0YTdjNGYyZjVhYmY1NWZhNTRjODQ3OGIyMjUzMzUzZDYwOTg2MmIyODllONEYyfY=: 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.814 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.755 nvme0n1 
00:23:24.755 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.755 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.755 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.755 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.755 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.755 18:08:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.755 18:08:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.755 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.449 nvme0n1 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.449 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.727 18:08:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
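[Editor's note] Two iterations in, the host-side shape of connect_authenticate is clear: pin the initiator to a single digest/dhgroup pair, attach with matching key material, confirm the controller materialized, detach. Condensed from the trace; rpc_cmd is assumed to be the usual wrapper around scripts/rpc.py, and get_main_ns_ip is the helper sketched earlier.

# Host-side sketch of one connect_authenticate pass.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # Pass criterion used at auth.sh@64: the named controller exists.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}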
00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.727 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.667 nvme0n1 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjNjAyYWJmYjg1ZGZlYzkyNGU5Y2JiMWNlZDM3NjEzNTE1MmQyM2YxNzI4NWYzKgz9qA==: 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDdhMGFjMjM4OTM5MGM3ZmQyZjE4YWZlMmMyMmU2NTVfmFjx: 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:26.668 18:08:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.668 18:08:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.608 nvme0n1 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
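[Editor's note] The final iteration (keyid 4, starting below) is the unidirectional case: ckeys[4] is empty, so the :+ expansion at auth.sh@58 produces an empty array, the attach carries only --dhchap-key key4, and the target is never challenged back.

# ${ckeys[keyid]:+...} expands to nothing when the controller key is empty:
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) # () for keyid=4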
00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFhMTBiYmIwOGY5YzIwYzdiYzRmYjU5NjBkY2E5MTQ5ZGUwZmU4MWZiYjhjZTQ5YjVhNDk5ZDJlNDQ5OTc5NIHKLkE=: 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.608 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.609 18:08:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 nvme0n1 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:28.549 
18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.549 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 request: 00:23:28.549 { 00:23:28.549 "name": "nvme0", 00:23:28.549 "trtype": "rdma", 00:23:28.549 "traddr": "192.168.100.8", 00:23:28.549 "adrfam": "ipv4", 00:23:28.549 "trsvcid": "4420", 00:23:28.549 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:23:28.549 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:28.549 "prchk_reftag": false, 00:23:28.549 "prchk_guard": false, 00:23:28.549 "hdgst": false, 00:23:28.549 "ddgst": false, 00:23:28.549 "allow_unrecognized_csi": false, 00:23:28.549 "method": "bdev_nvme_attach_controller", 00:23:28.549 "req_id": 1 00:23:28.549 } 00:23:28.549 Got JSON-RPC error response 00:23:28.549 response: 00:23:28.550 { 00:23:28.550 "code": -5, 00:23:28.550 "message": "Input/output error" 00:23:28.550 } 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.550 18:08:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 request: 00:23:28.810 { 00:23:28.810 "name": "nvme0", 00:23:28.810 "trtype": "rdma", 00:23:28.810 "traddr": "192.168.100.8", 00:23:28.810 "adrfam": "ipv4", 00:23:28.810 "trsvcid": "4420", 00:23:28.810 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:28.810 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:28.810 "prchk_reftag": false, 00:23:28.810 "prchk_guard": false, 00:23:28.810 "hdgst": false, 00:23:28.810 "ddgst": false, 00:23:28.810 "dhchap_key": "key2", 00:23:28.810 "allow_unrecognized_csi": false, 00:23:28.810 "method": "bdev_nvme_attach_controller", 00:23:28.810 "req_id": 1 00:23:28.810 } 00:23:28.810 Got JSON-RPC error response 00:23:28.810 response: 00:23:28.810 { 00:23:28.810 "code": -5, 00:23:28.810 "message": "Input/output error" 00:23:28.810 } 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:28.810 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.811 request: 00:23:28.811 { 00:23:28.811 "name": "nvme0", 00:23:28.811 "trtype": "rdma", 00:23:28.811 "traddr": "192.168.100.8", 00:23:28.811 "adrfam": "ipv4", 00:23:28.811 "trsvcid": "4420", 00:23:28.811 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:28.811 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:28.811 "prchk_reftag": false, 00:23:28.811 "prchk_guard": false, 00:23:28.811 "hdgst": false, 00:23:28.811 "ddgst": false, 00:23:28.811 "dhchap_key": "key1", 00:23:28.811 "dhchap_ctrlr_key": "ckey2", 00:23:28.811 "allow_unrecognized_csi": false, 00:23:28.811 "method": "bdev_nvme_attach_controller", 00:23:28.811 "req_id": 1 00:23:28.811 } 00:23:28.811 Got JSON-RPC error response 00:23:28.811 response: 00:23:28.811 { 00:23:28.811 "code": -5, 00:23:28.811 "message": "Input/output error" 00:23:28.811 } 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:23:28.811 18:08:47 
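[Editor's note] The three request/response failures above are deliberate. With the target re-keyed to sha256/ffdhe2048 key1, the host attaches with no key at all, with the wrong key (key2), and with a mismatched controller key (key1 plus ckey2); each attempt dies with JSON-RPC code -5, Input/output error, and the NOT wrapper turns the expected failure into a pass. A sketch, assuming NOT boils down to exit-status inversion (the valid_exec_arg and es bookkeeping in the trace is elided):

# Minimal stand-in for common/autotest_common.sh's NOT helper.
NOT() { ! "$@"; }

# Expected to fail with -5: the target demands DH-HMAC-CHAP, the host
# offers no key at all.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0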
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.811 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.071 nvme0n1 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.071 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.332 
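[Editor's note] Re-keying a live controller is a two-step handshake: flip the kernel target to the new pair first, then hand the same pair to the initiator with bdev_nvme_set_keys. The aggressive --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 on the attach above keeps the forced reconnects fast. Condensed from auth.sh@132 and @133; a mismatched pair draws the -13 Permission denied responses shown below instead.

# Target first, then host; both sides must agree before re-authentication.
nvmet_auth_set_key sha256 ffdhe2048 2
rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2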
18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.332 request: 00:23:29.332 { 00:23:29.332 "name": "nvme0", 00:23:29.332 "dhchap_key": "key1", 00:23:29.332 "dhchap_ctrlr_key": "ckey2", 00:23:29.332 "method": "bdev_nvme_set_keys", 00:23:29.332 "req_id": 1 00:23:29.332 } 00:23:29.332 Got JSON-RPC error response 00:23:29.332 response: 00:23:29.332 { 00:23:29.332 "code": -13, 00:23:29.332 "message": "Permission denied" 00:23:29.332 } 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:29.332 18:08:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:30.728 18:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.728 18:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:30.729 18:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.729 18:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.729 18:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.729 18:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:30.729 18:08:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExMTdiMDI2NDg5NjQyZmI5NWU1NmE3YTg2M2Y4ZmFkYzg0MDdkZTM4ZWY2YmQyyXzyww==: 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: ]] 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU5NTAyMTdjYjllYTA1ODQ1NmNjMDc5MTczNDU5N2IwYzdlMTI5NmQ1MWRkMjcyoo8crw==: 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.675 18:08:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.675 nvme0n1 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmE2NjQzNGZkNmZjMzFlZWY1ZjZkYTI5ZDhhOGEwNzM+A4f+: 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: ]] 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM3ODc2MTZmODgyNWUyNzMzMzQyOWUzYjY4YzM4YjJMMYHP: 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:31.675 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:31.676 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:31.676 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:31.676 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.676 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:31.676 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.676 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:31.676 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.676 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.936 request: 00:23:31.936 { 00:23:31.936 "name": "nvme0", 00:23:31.937 "dhchap_key": "key2", 00:23:31.937 "dhchap_ctrlr_key": "ckey1", 00:23:31.937 "method": "bdev_nvme_set_keys", 00:23:31.937 "req_id": 1 00:23:31.937 } 00:23:31.937 Got JSON-RPC error response 00:23:31.937 response: 00:23:31.937 { 00:23:31.937 "code": -13, 00:23:31.937 "message": "Permission denied" 00:23:31.937 } 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:31.937 18:08:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:32.883 18:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.883 18:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:32.883 18:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.883 18:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.883 18:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.883 18:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:32.883 18:08:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:33.823 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.823 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:33.823 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.823 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.823 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:23:34.085 
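[Editor's note] Both negative set_keys passes drain the same way: the trace polls bdev_nvme_get_controllers until jq length reports zero, i.e. until the controller that failed re-authentication has been torn down (the 1-second ctrlr-loss timeout keeps this short). The loop at auth.sh@137/@138 and @148/@149, condensed:

# Wait for the dropped controller to disappear from the initiator.
while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
done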
18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:34.085 rmmod nvme_rdma 00:23:34.085 rmmod nvme_fabrics 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3657672 ']' 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3657672 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3657672 ']' 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3657672 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3657672 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3657672' 00:23:34.085 killing process with pid 3657672 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3657672 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3657672 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:34.085 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:34.346 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:34.346 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:34.346 
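[Editor's note] cleanup tears the kernel target down leaf-to-root: the allowed-hosts link and host entry first, then the namespace, the port link, the port and subsystem directories, and only then the nvmet modules. The bare echo 0 at nvmf/common.sh@714 has its redirect target hidden by xtrace; disabling the namespace is the assumption made below.

# Teardown order, condensed from the clean_kernel_target trace around here.
sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo 0 > "$sub/namespaces/1/enable" # assumed target of the bare 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$sub/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$sub"
modprobe -r nvmet_rdma nvmet # only once configfs is empty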
18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:23:34.346 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:34.346 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:34.347 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:34.347 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:34.347 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:34.347 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:23:34.347 18:08:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:38.555 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:38.555 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:38.555 18:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.FdN /tmp/spdk.key-null.M55 /tmp/spdk.key-sha256.TRM /tmp/spdk.key-sha384.c3i /tmp/spdk.key-sha512.dPa /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:23:38.555 18:08:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:42.761 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:23:42.761 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:00:01.3 (8086 0b00): Already 
using the vfio-pci driver 00:23:42.761 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:23:42.761 00:23:42.761 real 1m13.293s 00:23:42.761 user 1m6.963s 00:23:42.761 sys 0m17.684s 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.761 ************************************ 00:23:42.761 END TEST nvmf_auth_host 00:23:42.761 ************************************ 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.761 ************************************ 00:23:42.761 START TEST nvmf_bdevperf 00:23:42.761 ************************************ 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:42.761 * Looking for test storage... 
00:23:42.761 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.761 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:43.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.023 --rc genhtml_branch_coverage=1 00:23:43.023 --rc genhtml_function_coverage=1 00:23:43.023 --rc genhtml_legend=1 00:23:43.023 --rc geninfo_all_blocks=1 00:23:43.023 --rc geninfo_unexecuted_blocks=1 00:23:43.023 00:23:43.023 ' 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:43.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.023 --rc genhtml_branch_coverage=1 00:23:43.023 --rc genhtml_function_coverage=1 00:23:43.023 --rc genhtml_legend=1 00:23:43.023 --rc geninfo_all_blocks=1 00:23:43.023 --rc geninfo_unexecuted_blocks=1 00:23:43.023 00:23:43.023 ' 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:43.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.023 --rc genhtml_branch_coverage=1 00:23:43.023 --rc genhtml_function_coverage=1 00:23:43.023 --rc genhtml_legend=1 00:23:43.023 --rc geninfo_all_blocks=1 00:23:43.023 --rc geninfo_unexecuted_blocks=1 00:23:43.023 00:23:43.023 ' 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:43.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.023 --rc genhtml_branch_coverage=1 00:23:43.023 --rc genhtml_function_coverage=1 00:23:43.023 --rc genhtml_legend=1 00:23:43.023 --rc geninfo_all_blocks=1 00:23:43.023 --rc geninfo_unexecuted_blocks=1 00:23:43.023 00:23:43.023 ' 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.023 18:09:01 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.023 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.024 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:43.024 18:09:01 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.024 18:09:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.166 18:09:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.166 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:23:51.167 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:23:51.167 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:23:51.167 Found net devices under 0000:98:00.0: mlx_0_0 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:23:51.167 Found net devices under 0000:98:00.1: mlx_0_1 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:51.167 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:51.167 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:51.167 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:23:51.167 altname enp152s0f0np0 00:23:51.168 altname ens817f0np0 00:23:51.168 inet 192.168.100.8/24 scope global mlx_0_0 00:23:51.168 valid_lft forever preferred_lft forever 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:51.168 18:09:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:51.168 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:51.168 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:23:51.168 altname enp152s0f1np1 00:23:51.168 altname ens817f1np1 00:23:51.168 inet 192.168.100.9/24 scope global mlx_0_1 00:23:51.168 valid_lft forever preferred_lft forever 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:51.168 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:51.429 18:09:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:51.429 192.168.100.9' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:51.429 192.168.100.9' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:51.429 192.168.100.9' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3678421 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3678421 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3678421 ']' 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.429 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.430 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.430 18:09:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:51.430 [2024-11-20 18:09:09.843615] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:23:51.430 [2024-11-20 18:09:09.843669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.430 [2024-11-20 18:09:09.942894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:51.689 [2024-11-20 18:09:09.990116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.689 [2024-11-20 18:09:09.990170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.689 [2024-11-20 18:09:09.990178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.689 [2024-11-20 18:09:09.990185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.689 [2024-11-20 18:09:09.990191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
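[annotation] The target bring-up just traced follows the usual nvmfappstart shape: start nvmf_tgt with the run's core mask, remember its pid, and block until the RPC socket answers before configuring anything. A sketch with the same flags; the polling loop stands in for the harness's waitforlisten and is illustrative only:

# Launch the target and wait for its RPC socket to come up.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.5
done
# First configuration step once up, matching the transport created below:
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192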
00:23:51.689 [2024-11-20 18:09:09.992135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.690 [2024-11-20 18:09:09.992408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.690 [2024-11-20 18:09:09.992409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.258 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.258 [2024-11-20 18:09:10.736280] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20ef650/0x20f3b40) succeed. 00:23:52.258 [2024-11-20 18:09:10.750767] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20f0c40/0x21351e0) succeed. 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.518 Malloc0 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:23:52.518 [2024-11-20 18:09:10.909176] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.518 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:52.519 { 00:23:52.519 "params": { 00:23:52.519 "name": "Nvme$subsystem", 00:23:52.519 "trtype": "$TEST_TRANSPORT", 00:23:52.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.519 "adrfam": "ipv4", 00:23:52.519 "trsvcid": "$NVMF_PORT", 00:23:52.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.519 "hdgst": ${hdgst:-false}, 00:23:52.519 "ddgst": ${ddgst:-false} 00:23:52.519 }, 00:23:52.519 "method": "bdev_nvme_attach_controller" 00:23:52.519 } 00:23:52.519 EOF 00:23:52.519 )") 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:23:52.519 18:09:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:52.519 "params": { 00:23:52.519 "name": "Nvme1", 00:23:52.519 "trtype": "rdma", 00:23:52.519 "traddr": "192.168.100.8", 00:23:52.519 "adrfam": "ipv4", 00:23:52.519 "trsvcid": "4420", 00:23:52.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.519 "hdgst": false, 00:23:52.519 "ddgst": false 00:23:52.519 }, 00:23:52.519 "method": "bdev_nvme_attach_controller" 00:23:52.519 }' 00:23:52.519 [2024-11-20 18:09:10.963583] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:23:52.519 [2024-11-20 18:09:10.963631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678687 ] 00:23:52.519 [2024-11-20 18:09:11.039901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.779 [2024-11-20 18:09:11.076477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.779 Running I/O for 1 seconds... 
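[annotation] The config streamed to bdevperf over /dev/fd/62 above is ordinary JSON and can be replayed standalone. A minimal sketch, assuming the standard subsystems wrapper that gen_nvmf_target_json puts around the printed bdev_nvme_attach_controller entry; all parameter values are taken verbatim from this run:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same queue depth, IO size, workload, and runtime as the run above.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1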
00:23:54.162 14847.00 IOPS, 58.00 MiB/s 00:23:54.162 Latency(us) 00:23:54.162 [2024-11-20T17:09:12.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.162 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:54.162 Verification LBA range: start 0x0 length 0x4000 00:23:54.162 Nvme1n1 : 1.01 14876.88 58.11 0.00 0.00 8557.39 2211.84 18896.21 00:23:54.162 [2024-11-20T17:09:12.698Z] =================================================================================================================== 00:23:54.162 [2024-11-20T17:09:12.698Z] Total : 14876.88 58.11 0.00 0.00 8557.39 2211.84 18896.21 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3678939 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.162 { 00:23:54.162 "params": { 00:23:54.162 "name": "Nvme$subsystem", 00:23:54.162 "trtype": "$TEST_TRANSPORT", 00:23:54.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.162 "adrfam": "ipv4", 00:23:54.162 "trsvcid": "$NVMF_PORT", 00:23:54.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.162 "hdgst": ${hdgst:-false}, 00:23:54.162 "ddgst": ${ddgst:-false} 00:23:54.162 }, 00:23:54.162 "method": "bdev_nvme_attach_controller" 00:23:54.162 } 00:23:54.162 EOF 00:23:54.162 )") 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:23:54.162 18:09:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:54.162 "params": { 00:23:54.162 "name": "Nvme1", 00:23:54.162 "trtype": "rdma", 00:23:54.162 "traddr": "192.168.100.8", 00:23:54.162 "adrfam": "ipv4", 00:23:54.162 "trsvcid": "4420", 00:23:54.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.162 "hdgst": false, 00:23:54.162 "ddgst": false 00:23:54.162 }, 00:23:54.162 "method": "bdev_nvme_attach_controller" 00:23:54.162 }' 00:23:54.162 [2024-11-20 18:09:12.467050] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:23:54.162 [2024-11-20 18:09:12.467108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678939 ]
00:23:54.163 [2024-11-20 18:09:12.543748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.163 [2024-11-20 18:09:12.579569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:54.422 Running I/O for 15 seconds...
00:23:56.307 16640.00 IOPS, 65.00 MiB/s [2024-11-20T17:09:15.784Z]
16800.00 IOPS, 65.62 MiB/s [2024-11-20T17:09:15.784Z]
18:09:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3678421
00:23:57.249 18:09:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:23:58.193 14920.67 IOPS, 58.28 MiB/s [2024-11-20T17:09:16.729Z]
00:23:58.193 [2024-11-20 18:09:16.452409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x180b00
00:23:58.193 [2024-11-20 18:09:16.452450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b6428000 sqhd:7250 p:0 m:0 dnr:0
[... the same command/completion pair repeats for each remaining queued I/O: READ commands for lba 95960 through 96248 (SGL KEYED DATA BLOCK, key:0x180b00) and WRITE commands for lba 96256 through 96960 (SGL DATA BLOCK OFFSET 0x0), len:8 each, every one completed as ABORTED - SQ DELETION (00/08) sqhd:7250 ...]
00:23:58.196 [2024-11-20 18:09:16.456620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:58.196 [2024-11-20 18:09:16.456633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:58.196 [2024-11-20 18:09:16.456640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0
00:23:58.196 [2024-11-20 18:09:16.456649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:58.196 [2024-11-20 18:09:16.460450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:58.196 [2024-11-20 18:09:16.480254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:23:58.196 [2024-11-20 18:09:16.483598] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:23:58.196 [2024-11-20 18:09:16.483624] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:23:58.196 [2024-11-20 18:09:16.483631] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:23:59.029 11190.50 IOPS, 43.71 MiB/s [2024-11-20T17:09:17.565Z]
00:23:59.029 [2024-11-20 18:09:17.487894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:23:59.029 [2024-11-20 18:09:17.487943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
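[Editor's note] The flood of ABORTED - SQ DELETION completions above is induced deliberately: host/bdevperf.sh hard-kills the target mid-run and only restarts it a few seconds later, so every queued I/O is aborted and the host's controller resets keep failing until the target is back. A minimal sketch of that step, using only the script line numbers visible in this log (the variable name is an assumption):

    kill -9 "$nvmfpid"   # bdevperf.sh line 33: hard-kill the running nvmf_tgt
    sleep 3              # bdevperf.sh line 35: leave the host retrying failed controller resets
    tgt_init             # bdevperf.sh line 36: restart and reconfigure the target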
00:23:59.029 [2024-11-20 18:09:17.488515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:23:59.029 [2024-11-20 18:09:17.488524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:23:59.029 [2024-11-20 18:09:17.488532] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:23:59.029 [2024-11-20 18:09:17.488541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:23:59.029 [2024-11-20 18:09:17.492292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:59.029 [2024-11-20 18:09:17.496129] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:23:59.029 [2024-11-20 18:09:17.496147] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:23:59.029 [2024-11-20 18:09:17.496153] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:24:00.233 8952.40 IOPS, 34.97 MiB/s [2024-11-20T17:09:18.769Z]
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3678421 Killed                  "${NVMF_APP[@]}" "$@"
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3680265
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3680265
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3680265 ']'
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:00.233 18:09:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:00.233 [2024-11-20 18:09:18.485029] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization...
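[Editor's note] nvmfappstart/waitforlisten above launch the replacement target and then block until it answers on /var/tmp/spdk.sock (max_retries=100 in the trace). A rough equivalent: the binary path, core mask and socket come from this log, while the polling body is an assumption about what the helper does, not its exact code:

    NVMF_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $NVMF_TGT -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                        # max_retries=100, as logged
        kill -0 "$nvmfpid" 2>/dev/null || exit 1           # bail out if the target died early
        $RPC -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5                                          # assumed back-off between probes
    done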
00:24:00.233 [2024-11-20 18:09:18.485084] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:00.233 [2024-11-20 18:09:18.500706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:24:00.233 [2024-11-20 18:09:18.500729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:00.233 [2024-11-20 18:09:18.500953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:00.233 [2024-11-20 18:09:18.500963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:00.233 [2024-11-20 18:09:18.500971] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:24:00.233 [2024-11-20 18:09:18.500981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:00.233 [2024-11-20 18:09:18.512375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:00.233 [2024-11-20 18:09:18.515972] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:00.233 [2024-11-20 18:09:18.515990] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:00.233 [2024-11-20 18:09:18.515996] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:24:00.233 [2024-11-20 18:09:18.574214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:00.233 [2024-11-20 18:09:18.603481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:00.233 [2024-11-20 18:09:18.603508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:00.233 [2024-11-20 18:09:18.603514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:00.233 [2024-11-20 18:09:18.603520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:00.233 [2024-11-20 18:09:18.603524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
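[Editor's note] The app_setup_trace notices above are the target's own tracing recipe, spelled out here for reference (the -f parse step is an assumption about the spdk_trace tool's options on this build):

    spdk_trace -s nvmf -i 0          # live snapshot of tracepoints while the target runs
    cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shm file for offline analysis/debug
    spdk_trace -f /tmp/nvmf_trace.0  # parse the copied trace file, if supported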
00:24:00.233 [2024-11-20 18:09:18.604594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:00.233 [2024-11-20 18:09:18.604753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:00.233 [2024-11-20 18:09:18.604754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:01.064 7460.33 IOPS, 29.14 MiB/s [2024-11-20T17:09:19.600Z]
18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:01.064 [2024-11-20 18:09:19.368828] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x237c650/0x2380b40) succeed.
00:24:01.064 [2024-11-20 18:09:19.380782] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x237dc40/0x23c21e0) succeed.
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:01.064 Malloc0
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
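[Editor's note] The rpc_cmd calls above configure the freshly restarted target end-to-end; the same sequence can be replayed by hand with scripts/rpc.py (arguments copied from this log; the rpc.py path within the workspace is assumed):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420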
00:24:01.064 [2024-11-20 18:09:19.519902] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:24:01.064 [2024-11-20 18:09:19.520724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:24:01.064 [2024-11-20 18:09:19.520752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:01.064 [2024-11-20 18:09:19.520976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:01.064 [2024-11-20 18:09:19.520986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:01.064 [2024-11-20 18:09:19.520993] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:24:01.064 [2024-11-20 18:09:19.521002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.064 18:09:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3678939
00:24:01.064 [2024-11-20 18:09:19.530743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:01.064 [2024-11-20 18:09:19.582406] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:24:02.267 6860.71 IOPS, 26.80 MiB/s [2024-11-20T17:09:22.188Z]
8119.62 IOPS, 31.72 MiB/s [2024-11-20T17:09:23.129Z]
9081.00 IOPS, 35.47 MiB/s [2024-11-20T17:09:24.077Z]
9859.70 IOPS, 38.51 MiB/s [2024-11-20T17:09:25.018Z]
10491.64 IOPS, 40.98 MiB/s [2024-11-20T17:09:25.954Z]
11025.25 IOPS, 43.07 MiB/s [2024-11-20T17:09:26.892Z]
11479.54 IOPS, 44.84 MiB/s [2024-11-20T17:09:27.832Z]
11862.36 IOPS, 46.34 MiB/s
00:24:09.296 Latency(us)
00:24:09.296 [2024-11-20T17:09:27.832Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min         max
00:24:09.296 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:09.296 Verification LBA range: start 0x0 length 0x4000
00:24:09.296 Nvme1n1            :      15.00    12189.26      47.61    7824.85      0.00     6368.45     358.40  1034594.99
00:24:09.296 [2024-11-20T17:09:27.832Z] ===================================================================================================================
00:24:09.296 [2024-11-20T17:09:27.832Z] Total              :               12189.26      47.61    7824.85      0.00     6368.45     358.40  1034594.99
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:24:09.556 18:09:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
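[Editor's note] The nvmf/common.sh line numbers traced above (@124 through @128) outline the module-unload retry loop used at teardown; a sketch under that reading (the loop body beyond what the log shows, including the back-off, is an assumption):

    set +e                               # common.sh@124: unloading may need retries
    for i in {1..20}; do                 # common.sh@125
        modprobe -v -r nvme-rdma &&      # common.sh@126
        modprobe -v -r nvme-fabrics &&   # common.sh@127
        break                            # both modules unloaded: done
        sleep 1                          # assumed back-off before the next attempt
    done
    set -e                               # common.sh@128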
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3680265 ']'
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3680265
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3680265 ']'
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3680265
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3680265
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3680265'
killing process with pid 3680265
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3680265
00:24:09.556 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3680265
00:24:09.816 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:09.816 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:24:09.816
00:24:09.816 real 0m27.139s
00:24:09.816 user 1m4.592s
00:24:09.816 sys 0m7.328s
00:24:09.816 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:09.816 18:09:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:09.816 ************************************
00:24:09.816 END TEST nvmf_bdevperf
00:24:09.816 ************************************
00:24:09.816 18:09:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:24:09.816 18:09:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:09.816 18:09:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:09.816 18:09:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************ 00:24:09.816 START TEST nvmf_target_disconnect 00:24:09.816 ************************************ 00:24:09.816 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:10.077 * Looking for test storage... 00:24:10.077 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:10.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.077 --rc genhtml_branch_coverage=1 00:24:10.077 --rc genhtml_function_coverage=1 00:24:10.077 --rc genhtml_legend=1 00:24:10.077 --rc geninfo_all_blocks=1 00:24:10.077 --rc geninfo_unexecuted_blocks=1 00:24:10.077 00:24:10.077 ' 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:10.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.077 --rc genhtml_branch_coverage=1 00:24:10.077 --rc genhtml_function_coverage=1 00:24:10.077 --rc genhtml_legend=1 00:24:10.077 --rc geninfo_all_blocks=1 00:24:10.077 --rc geninfo_unexecuted_blocks=1 00:24:10.077 00:24:10.077 ' 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:10.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.077 --rc genhtml_branch_coverage=1 00:24:10.077 --rc genhtml_function_coverage=1 00:24:10.077 --rc genhtml_legend=1 00:24:10.077 --rc geninfo_all_blocks=1 00:24:10.077 --rc geninfo_unexecuted_blocks=1 00:24:10.077 00:24:10.077 ' 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:10.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.077 --rc genhtml_branch_coverage=1 00:24:10.077 --rc genhtml_function_coverage=1 00:24:10.077 --rc genhtml_legend=1 00:24:10.077 --rc geninfo_all_blocks=1 00:24:10.077 --rc geninfo_unexecuted_blocks=1 00:24:10.077 00:24:10.077 ' 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.077 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.078 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.078 18:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:20.076 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:20.076 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:20.076 18:09:36 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:20.076 Found net devices under 0000:98:00.0: mlx_0_0 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:20.076 Found net devices under 0000:98:00.1: mlx_0_1 00:24:20.076 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:20.077 18:09:36 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:20.077 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.077 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:24:20.077 altname enp152s0f0np0 00:24:20.077 altname ens817f0np0 00:24:20.077 inet 192.168.100.8/24 scope global mlx_0_0 00:24:20.077 valid_lft forever preferred_lft forever 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:20.077 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.077 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:24:20.077 altname enp152s0f1np1 00:24:20.077 altname ens817f1np1 00:24:20.077 inet 192.168.100.9/24 scope global mlx_0_1 00:24:20.077 valid_lft forever preferred_lft forever 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
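The per-port address harvesting traced above (nvmf/common.sh@116-117) is a plain ip(8) pipeline; it can be reproduced by hand as follows (interface names and addresses are the ones from this run):

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9

The first address becomes NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP a few records below.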
00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:20.077 192.168.100.9' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:20.077 192.168.100.9' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:20.077 192.168.100.9' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:20.077 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:20.078 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:20.078 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:20.078 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:20.078 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:20.078 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.078 18:09:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:20.078 ************************************ 00:24:20.078 START TEST nvmf_target_disconnect_tc1 00:24:20.078 ************************************ 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:24:20.078 18:09:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:20.078 [2024-11-20 18:09:37.149904] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:20.078 [2024-11-20 18:09:37.149944] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:20.078 [2024-11-20 18:09:37.149953] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:24:20.078 [2024-11-20 18:09:38.154304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:24:20.078 [2024-11-20 18:09:38.154366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
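For context on what tc1 asserts: the reconnect example is pointed at 192.168.100.8:4420 before any subsystem is accepting connections, the RDMA connect is rejected (RDMA_CM_EVENT_REJECTED above), and the NOT wrapper inverts the exit status, so the es=1 evaluation just below is the pass condition. Stripped of the harness, the invocation is (arguments copied from this run):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
    # tc1 passes only if this command exits non-zero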
00:24:20.078 [2024-11-20 18:09:38.154393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:24:20.078 [2024-11-20 18:09:38.154445] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:20.078 [2024-11-20 18:09:38.154465] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:24:20.078 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:24:20.078 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:20.078 Initializing NVMe Controllers 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:20.078 00:24:20.078 real 0m1.155s 00:24:20.078 user 0m0.954s 00:24:20.078 sys 0m0.182s 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:20.078 ************************************ 00:24:20.078 END TEST nvmf_target_disconnect_tc1 00:24:20.078 ************************************ 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:20.078 ************************************ 00:24:20.078 START TEST nvmf_target_disconnect_tc2 00:24:20.078 ************************************ 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3686660 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3686660 00:24:20.078 18:09:38 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3686660 ']' 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.078 18:09:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.078 [2024-11-20 18:09:38.305686] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:24:20.078 [2024-11-20 18:09:38.305736] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.078 [2024-11-20 18:09:38.403986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.078 [2024-11-20 18:09:38.455833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.078 [2024-11-20 18:09:38.455897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.078 [2024-11-20 18:09:38.455906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.078 [2024-11-20 18:09:38.455913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.078 [2024-11-20 18:09:38.455920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
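A note on the -m 0xF0 coremask handed to nvmf_tgt here: 0xF0 is binary 1111 0000, i.e. cores 4 through 7, which is why the four reactors below come up on cores 4, 5, 6 and 7. Any such mask can be decoded with a short loop (sketch):

    mask=0xF0
    for bit in {0..31}; do
        (( (mask >> bit) & 1 )) && echo "core $bit enabled"   # prints cores 4-7 for 0xF0
    done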
00:24:20.078 [2024-11-20 18:09:38.458401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:20.078 [2024-11-20 18:09:38.458563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:20.078 [2024-11-20 18:09:38.458720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:20.078 [2024-11-20 18:09:38.458720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.670 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.930 Malloc0 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.931 [2024-11-20 18:09:39.267241] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e40320/0x1e4be40) succeed. 00:24:20.931 [2024-11-20 18:09:39.283344] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e419b0/0x1ecbe80) succeed. 
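The target bring-up that tc2 performs (malloc bdev and transport above, subsystem, namespace and listeners just below) is the stock SPDK RPC sequence; collected in one place it is roughly the following (a sketch against scripts/rpc.py, which is what the rpc_cmd wrapper ultimately drives; values are the ones from this run):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420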
00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.931 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.191 [2024-11-20 18:09:39.472416] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3686950 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:21.191 18:09:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:23.104 18:09:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3686660
18:09:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Write completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 Read completed with error (sct=0, sc=8)
00:24:24.211 starting I/O failed
00:24:24.211 [2024-11-20 18:09:42.717669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:25.152 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3686660 Killed "${NVMF_APP[@]}" "$@"
00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8
00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- #
nvmfappstart -m 0xF0 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3687699 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3687699 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3687699 ']' 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.152 18:09:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.152 [2024-11-20 18:09:43.563580] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:24:25.152 [2024-11-20 18:09:43.563653] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.152 [2024-11-20 18:09:43.660520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:25.412 [2024-11-20 18:09:43.694101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.412 [2024-11-20 18:09:43.694134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.412 [2024-11-20 18:09:43.694139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.412 [2024-11-20 18:09:43.694144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.412 [2024-11-20 18:09:43.694149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
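This stretch is the heart of tc2: a reconnect workload is launched against cnode1, the target is SIGKILLed out from under it (kill -9 3686660 above), every outstanding I/O completes with an error, and disconnect_init then brings up a fresh nvmf_tgt (pid 3687699, whose reactors start below) so the host can re-establish the connection. Reduced to a sketch (values from this run):

    reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!                 # 3686950 in this run
    sleep 2
    kill -9 "$nvmfpid"              # hard-kill the target mid-I/O (3686660)
    sleep 2
    disconnect_init 192.168.100.8   # new nvmf_tgt, same subsystem and listener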
00:24:25.412 [2024-11-20 18:09:43.695726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:25.412 [2024-11-20 18:09:43.695926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:25.412 [2024-11-20 18:09:43.696200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:25.412 [2024-11-20 18:09:43.696200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:25.412 Read completed with error (sct=0, sc=8) 00:24:25.412 starting I/O failed 00:24:25.413 [31 further Read/Write "completed with error (sct=0, sc=8)" / "starting I/O failed" pairs at 00:24:25.413, one per outstanding I/O on the qpair, condensed] 00:24:25.413 [2024-11-20 18:09:43.723464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.413 [2024-11-20 18:09:43.725882] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:25.413 [2024-11-20 18:09:43.725903] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:25.413 [2024-11-20 18:09:43.725918] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.984 Malloc0 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.984 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.984 [2024-11-20 18:09:44.480551] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2170320/0x217be40) succeed. 00:24:25.984 [2024-11-20 18:09:44.491682] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21719b0/0x21fbe80) succeed. 
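The two create_ib_device notices show nvmf_create_transport binding both ports of the mlx5 NIC. If a run like this needs the RDMA side cross-checked on the node, the standard libibverbs utilities can do it; a small sketch, assuming ibv_devices/ibv_devinfo from libibverbs-utils are installed on the test host:

    # List RDMA devices, then confirm port state and link layer on the first port.
    ibv_devices
    ibv_devinfo -d mlx5_0 | grep -E 'state|link_layer'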
00:24:26.244 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.244 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.244 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.244 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.244 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.244 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.244 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.244 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.245 [2024-11-20 18:09:44.632090] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.245 18:09:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3686950 00:24:26.245 [2024-11-20 18:09:44.730282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.245 qpair failed and we were unable to recover it. 
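At this point the relaunched target is fully configured: a 64 MB malloc bdev with 512-byte blocks, an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 with the namespace attached, and subsystem plus discovery listeners on 192.168.100.8:4420. The same configuration can be replayed by hand with scripts/rpc.py; every value below is copied from the rpc_cmd calls in the log (only the $rpc shorthand is added, reusing $rootdir from the sketch above):

    rpc="$rootdir/scripts/rpc.py"
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420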
00:24:26.245 [2024-11-20 18:09:44.742856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.245 [2024-11-20 18:09:44.742906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.245 [2024-11-20 18:09:44.742938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.245 [2024-11-20 18:09:44.742945] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.245 [2024-11-20 18:09:44.742950] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.245 [2024-11-20 18:09:44.752087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.245 qpair failed and we were unable to recover it. 00:24:26.245 [2024-11-20 18:09:44.762893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.245 [2024-11-20 18:09:44.762926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.245 [2024-11-20 18:09:44.762939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.245 [2024-11-20 18:09:44.762945] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.245 [2024-11-20 18:09:44.762950] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.245 [2024-11-20 18:09:44.771757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.245 qpair failed and we were unable to recover it. 00:24:26.506 [2024-11-20 18:09:44.781771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.506 [2024-11-20 18:09:44.781810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.506 [2024-11-20 18:09:44.781822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.506 [2024-11-20 18:09:44.781828] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.506 [2024-11-20 18:09:44.781833] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.506 [2024-11-20 18:09:44.791952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.506 qpair failed and we were unable to recover it. 
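The failure signature established above, _nvmf_ctrlr_add_io_qpair rejecting controller ID 0x1, the Fabric CONNECT poll failing with sct 1, sc 130, then CQ transport error -6 killing the qpair, repeats for every reconnect attempt through the rest of this section. When triaging such a run it is quicker to tally the signatures than to read them; a sketch, assuming this console output was saved to build.log (the filename is an assumption):

    # Count failed CONNECT polls, then break CQ transport errors down by qpair id.
    grep -c 'Failed to poll NVMe-oF Fabric CONNECT command' build.log
    grep -o 'CQ transport error -6 ([^)]*) on qpair id [0-9]*' build.log | sort | uniq -c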
00:24:26.506 [2024-11-20 18:09:44.801869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.506 [2024-11-20 18:09:44.801905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.506 [2024-11-20 18:09:44.801918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.506 [2024-11-20 18:09:44.801923] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.506 [2024-11-20 18:09:44.801928] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.506 [2024-11-20 18:09:44.811842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.506 qpair failed and we were unable to recover it. 00:24:26.506 [2024-11-20 18:09:44.822710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.506 [2024-11-20 18:09:44.822742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.506 [2024-11-20 18:09:44.822754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.506 [2024-11-20 18:09:44.822759] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.506 [2024-11-20 18:09:44.822763] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.506 [2024-11-20 18:09:44.832006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 00:24:26.507 [2024-11-20 18:09:44.842480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:44.842510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:44.842522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:44.842527] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:44.842531] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:44.852094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 
00:24:26.507 [2024-11-20 18:09:44.862563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:44.862595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:44.862607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:44.862612] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:44.862616] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:44.872050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 00:24:26.507 [2024-11-20 18:09:44.882371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:44.882402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:44.882414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:44.882422] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:44.882427] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:44.892220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 00:24:26.507 [2024-11-20 18:09:44.902491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:44.902524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:44.902536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:44.902541] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:44.902546] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:44.912205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 
00:24:26.507 [2024-11-20 18:09:44.922747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:44.922779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:44.922791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:44.922796] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:44.922800] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:44.932194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 00:24:26.507 [2024-11-20 18:09:44.942983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:44.943014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:44.943025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:44.943031] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:44.943035] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:44.952656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 00:24:26.507 [2024-11-20 18:09:44.962454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:44.962485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:44.962497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:44.962502] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:44.962509] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:44.972295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 
00:24:26.507 [2024-11-20 18:09:44.983230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:44.983265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:44.983286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:44.983293] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:44.983298] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:44.992457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 00:24:26.507 [2024-11-20 18:09:45.002963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:45.002997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:45.003018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:45.003024] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:45.003029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:45.012471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 00:24:26.507 [2024-11-20 18:09:45.023401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.507 [2024-11-20 18:09:45.023441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.507 [2024-11-20 18:09:45.023462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.507 [2024-11-20 18:09:45.023468] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.507 [2024-11-20 18:09:45.023473] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.507 [2024-11-20 18:09:45.032586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.507 qpair failed and we were unable to recover it. 
00:24:26.768 [2024-11-20 18:09:45.042628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.768 [2024-11-20 18:09:45.042659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.768 [2024-11-20 18:09:45.042672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.768 [2024-11-20 18:09:45.042677] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.768 [2024-11-20 18:09:45.042682] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.768 [2024-11-20 18:09:45.052097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.768 qpair failed and we were unable to recover it. 00:24:26.768 [2024-11-20 18:09:45.063074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.768 [2024-11-20 18:09:45.063108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.768 [2024-11-20 18:09:45.063120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.768 [2024-11-20 18:09:45.063125] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.768 [2024-11-20 18:09:45.063130] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.768 [2024-11-20 18:09:45.072793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.768 qpair failed and we were unable to recover it. 00:24:26.768 [2024-11-20 18:09:45.083130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.083160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.083171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.083176] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.083181] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.092559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 
00:24:26.769 [2024-11-20 18:09:45.103191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.103230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.103242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.103247] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.103252] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.113021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 00:24:26.769 [2024-11-20 18:09:45.122844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.122878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.122889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.122894] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.122899] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.132839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 00:24:26.769 [2024-11-20 18:09:45.143612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.143646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.143661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.143667] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.143671] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.153009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 
00:24:26.769 [2024-11-20 18:09:45.163356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.163389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.163401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.163406] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.163410] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.172684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 00:24:26.769 [2024-11-20 18:09:45.183335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.183377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.183389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.183394] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.183398] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.193069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 00:24:26.769 [2024-11-20 18:09:45.202961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.202992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.203003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.203008] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.203013] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.212900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 
00:24:26.769 [2024-11-20 18:09:45.223700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.223733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.223744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.223756] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.223761] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.232928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 00:24:26.769 [2024-11-20 18:09:45.243850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.243883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.243895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.243899] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.243904] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.253279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 00:24:26.769 [2024-11-20 18:09:45.263785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.263828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.263848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.263854] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.263859] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.273255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 
00:24:26.769 [2024-11-20 18:09:45.283190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.769 [2024-11-20 18:09:45.283223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.769 [2024-11-20 18:09:45.283236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.769 [2024-11-20 18:09:45.283242] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.769 [2024-11-20 18:09:45.283246] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.769 [2024-11-20 18:09:45.293219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.769 qpair failed and we were unable to recover it. 00:24:27.030 [2024-11-20 18:09:45.303998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.030 [2024-11-20 18:09:45.304034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.030 [2024-11-20 18:09:45.304046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.030 [2024-11-20 18:09:45.304052] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.030 [2024-11-20 18:09:45.304057] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.030 [2024-11-20 18:09:45.313100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.030 qpair failed and we were unable to recover it. 00:24:27.030 [2024-11-20 18:09:45.323870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.030 [2024-11-20 18:09:45.323903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.030 [2024-11-20 18:09:45.323914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.030 [2024-11-20 18:09:45.323919] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.030 [2024-11-20 18:09:45.323924] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.030 [2024-11-20 18:09:45.333625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.030 qpair failed and we were unable to recover it. 
00:24:27.030 [2024-11-20 18:09:45.344025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.030 [2024-11-20 18:09:45.344058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.030 [2024-11-20 18:09:45.344070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.030 [2024-11-20 18:09:45.344075] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.030 [2024-11-20 18:09:45.344079] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.030 [2024-11-20 18:09:45.353575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.030 qpair failed and we were unable to recover it. 00:24:27.030 [2024-11-20 18:09:45.363629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.030 [2024-11-20 18:09:45.363659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.030 [2024-11-20 18:09:45.363671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.030 [2024-11-20 18:09:45.363676] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.363680] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.373407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 00:24:27.031 [2024-11-20 18:09:45.384182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.384215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.384226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.384231] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.384236] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.393459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 
00:24:27.031 [2024-11-20 18:09:45.404019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.404053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.404064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.404069] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.404073] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.413627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 00:24:27.031 [2024-11-20 18:09:45.424064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.424094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.424105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.424110] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.424114] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.433545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 00:24:27.031 [2024-11-20 18:09:45.443712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.443744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.443756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.443761] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.443765] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.453453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 
00:24:27.031 [2024-11-20 18:09:45.464321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.464350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.464361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.464366] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.464370] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.473673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 00:24:27.031 [2024-11-20 18:09:45.484415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.484448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.484462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.484467] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.484472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.493828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 00:24:27.031 [2024-11-20 18:09:45.504420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.504454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.504465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.504470] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.504474] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.513923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 
00:24:27.031 [2024-11-20 18:09:45.523873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.523904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.523916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.523920] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.523925] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.533802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 00:24:27.031 [2024-11-20 18:09:45.544585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.031 [2024-11-20 18:09:45.544622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.031 [2024-11-20 18:09:45.544633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.031 [2024-11-20 18:09:45.544638] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.031 [2024-11-20 18:09:45.544642] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.031 [2024-11-20 18:09:45.554044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.031 qpair failed and we were unable to recover it. 00:24:27.293 [2024-11-20 18:09:45.564433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.293 [2024-11-20 18:09:45.564468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.293 [2024-11-20 18:09:45.564479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.293 [2024-11-20 18:09:45.564487] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.293 [2024-11-20 18:09:45.564492] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:27.293 [2024-11-20 18:09:45.573891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.293 qpair failed and we were unable to recover it. 
00:24:27.293 [2024-11-20 18:09:45.584769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.293 [2024-11-20 18:09:45.584805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.293 [2024-11-20 18:09:45.584816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.293 [2024-11-20 18:09:45.584821] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.293 [2024-11-20 18:09:45.584825] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.293 [2024-11-20 18:09:45.594043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.293 qpair failed and we were unable to recover it.
00:24:27.293 [2024-11-20 18:09:45.604340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.293 [2024-11-20 18:09:45.604372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.293 [2024-11-20 18:09:45.604392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.293 [2024-11-20 18:09:45.604398] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.293 [2024-11-20 18:09:45.604403] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.293 [2024-11-20 18:09:45.614091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.293 qpair failed and we were unable to recover it.
00:24:27.293 [2024-11-20 18:09:45.625035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.293 [2024-11-20 18:09:45.625068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.293 [2024-11-20 18:09:45.625089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.293 [2024-11-20 18:09:45.625095] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.293 [2024-11-20 18:09:45.625100] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.293 [2024-11-20 18:09:45.634341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.293 qpair failed and we were unable to recover it.
00:24:27.293 [2024-11-20 18:09:45.644825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.293 [2024-11-20 18:09:45.644860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.293 [2024-11-20 18:09:45.644888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.293 [2024-11-20 18:09:45.644894] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.293 [2024-11-20 18:09:45.644899] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.293 [2024-11-20 18:09:45.654430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.293 qpair failed and we were unable to recover it.
00:24:27.293 [2024-11-20 18:09:45.664906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.293 [2024-11-20 18:09:45.664940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.293 [2024-11-20 18:09:45.664953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.293 [2024-11-20 18:09:45.664958] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.293 [2024-11-20 18:09:45.664963] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.293 [2024-11-20 18:09:45.674328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.293 qpair failed and we were unable to recover it.
00:24:27.293 [2024-11-20 18:09:45.684553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.294 [2024-11-20 18:09:45.684585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.294 [2024-11-20 18:09:45.684597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.294 [2024-11-20 18:09:45.684602] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.294 [2024-11-20 18:09:45.684607] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.294 [2024-11-20 18:09:45.694332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.294 qpair failed and we were unable to recover it.
00:24:27.294 [2024-11-20 18:09:45.705019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.294 [2024-11-20 18:09:45.705052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.294 [2024-11-20 18:09:45.705063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.294 [2024-11-20 18:09:45.705068] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.294 [2024-11-20 18:09:45.705073] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.294 [2024-11-20 18:09:45.714428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.294 qpair failed and we were unable to recover it.
00:24:27.294 [2024-11-20 18:09:45.725046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.294 [2024-11-20 18:09:45.725075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.294 [2024-11-20 18:09:45.725087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.294 [2024-11-20 18:09:45.725092] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.294 [2024-11-20 18:09:45.725097] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.294 [2024-11-20 18:09:45.734476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.294 qpair failed and we were unable to recover it.
00:24:27.294 [2024-11-20 18:09:45.744869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.294 [2024-11-20 18:09:45.744896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.294 [2024-11-20 18:09:45.744908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.294 [2024-11-20 18:09:45.744913] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.294 [2024-11-20 18:09:45.744918] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.294 [2024-11-20 18:09:45.754378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.294 qpair failed and we were unable to recover it.
00:24:27.294 [2024-11-20 18:09:45.764512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.294 [2024-11-20 18:09:45.764543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.294 [2024-11-20 18:09:45.764555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.294 [2024-11-20 18:09:45.764560] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.294 [2024-11-20 18:09:45.764564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.294 [2024-11-20 18:09:45.774591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.294 qpair failed and we were unable to recover it.
00:24:27.294 [2024-11-20 18:09:45.785159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.294 [2024-11-20 18:09:45.785189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.294 [2024-11-20 18:09:45.785201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.294 [2024-11-20 18:09:45.785206] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.294 [2024-11-20 18:09:45.785210] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.294 [2024-11-20 18:09:45.794670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.294 qpair failed and we were unable to recover it.
00:24:27.294 [2024-11-20 18:09:45.805142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.294 [2024-11-20 18:09:45.805175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.294 [2024-11-20 18:09:45.805187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.294 [2024-11-20 18:09:45.805192] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.294 [2024-11-20 18:09:45.805196] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.294 [2024-11-20 18:09:45.814606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.294 qpair failed and we were unable to recover it.
00:24:27.294 [2024-11-20 18:09:45.825653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.294 [2024-11-20 18:09:45.825689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.294 [2024-11-20 18:09:45.825702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.294 [2024-11-20 18:09:45.825707] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.294 [2024-11-20 18:09:45.825712] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.556 [2024-11-20 18:09:45.834601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.556 qpair failed and we were unable to recover it.
00:24:27.556 [2024-11-20 18:09:45.844735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.556 [2024-11-20 18:09:45.844765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.556 [2024-11-20 18:09:45.844777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.556 [2024-11-20 18:09:45.844782] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.556 [2024-11-20 18:09:45.844786] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.556 [2024-11-20 18:09:45.854870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.556 qpair failed and we were unable to recover it.
00:24:27.556 [2024-11-20 18:09:45.865247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.556 [2024-11-20 18:09:45.865279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.556 [2024-11-20 18:09:45.865290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.556 [2024-11-20 18:09:45.865295] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.556 [2024-11-20 18:09:45.865299] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.556 [2024-11-20 18:09:45.874726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.556 qpair failed and we were unable to recover it.
00:24:27.556 [2024-11-20 18:09:45.885335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.556 [2024-11-20 18:09:45.885367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.556 [2024-11-20 18:09:45.885378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.556 [2024-11-20 18:09:45.885383] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.556 [2024-11-20 18:09:45.885388] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.556 [2024-11-20 18:09:45.894844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.556 qpair failed and we were unable to recover it.
00:24:27.556 [2024-11-20 18:09:45.905570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.556 [2024-11-20 18:09:45.905601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.556 [2024-11-20 18:09:45.905612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.556 [2024-11-20 18:09:45.905617] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.556 [2024-11-20 18:09:45.905625] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.556 [2024-11-20 18:09:45.915149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:45.925257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:45.925285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:45.925296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:45.925301] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:45.925305] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.557 [2024-11-20 18:09:45.934743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:45.945550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:45.945586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:45.945597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:45.945602] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:45.945606] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.557 [2024-11-20 18:09:45.955042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:45.965838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:45.965879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:45.965890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:45.965895] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:45.965900] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.557 [2024-11-20 18:09:45.975232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:45.985767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:45.985800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:45.985811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:45.985816] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:45.985820] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.557 [2024-11-20 18:09:45.995084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:46.005257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:46.005289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:46.005301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:46.005306] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:46.005310] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.557 [2024-11-20 18:09:46.015076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:46.026283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:46.026314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:46.026326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:46.026331] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:46.026335] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.557 [2024-11-20 18:09:46.035168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:46.045945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:46.045976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:46.045988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:46.045992] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:46.045997] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.557 [2024-11-20 18:09:46.055164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:46.066010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:46.066041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:46.066051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:46.066056] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:46.066061] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.557 [2024-11-20 18:09:46.075183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-11-20 18:09:46.085737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-11-20 18:09:46.085771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-11-20 18:09:46.085783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-11-20 18:09:46.085788] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-11-20 18:09:46.085792] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.095637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.106234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.106266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.106277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.106282] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.106286] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.115409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.125760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.125794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.125805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.125810] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.125815] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.135564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.146565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.146595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.146607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.146612] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.146616] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.155640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.166242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.166273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.166287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.166292] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.166296] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.175564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.185989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.186022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.186034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.186038] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.186043] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.195852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.206290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.206321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.206333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.206337] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.206342] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.215989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.226686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.226718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.226729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.226734] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.226739] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.236004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.246437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.246469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.246480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.246485] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.246492] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.255906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.265940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.265976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.265987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.265993] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.265997] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.276207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.288640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.288667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.288679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.288684] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.288688] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.296237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.306985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.307022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.307043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.307049] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.307054] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.315581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.325491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.325522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.325537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.325545] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.325551] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:27.819 [2024-11-20 18:09:46.335974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-11-20 18:09:46.346295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-11-20 18:09:46.346325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-11-20 18:09:46.346342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-11-20 18:09:46.346348] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-11-20 18:09:46.346352] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.355978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.366373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.366409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.366423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.366430] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.366438] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.376075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.386319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.386355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.386367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.386373] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.386377] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.396267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.406298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.406330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.406344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.406350] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.406358] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.416172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.426556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.426591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.426607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.426612] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.426617] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.436315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.446488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.446519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.446531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.446536] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.446541] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.456122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.465971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.466000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.466012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.466017] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.466021] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.476270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.485826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.485857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.485873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.485878] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.485882] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.496339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.506725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.506755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.506766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.506774] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.506779] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.081 [2024-11-20 18:09:46.516378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-11-20 18:09:46.526727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-11-20 18:09:46.526766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-11-20 18:09:46.526780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-11-20 18:09:46.526785] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-11-20 18:09:46.526789] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.082 [2024-11-20 18:09:46.536470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.082 qpair failed and we were unable to recover it.
00:24:28.082 [2024-11-20 18:09:46.546763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.082 [2024-11-20 18:09:46.546793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.082 [2024-11-20 18:09:46.546805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.082 [2024-11-20 18:09:46.546810] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.082 [2024-11-20 18:09:46.546814] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.082 [2024-11-20 18:09:46.556296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.082 qpair failed and we were unable to recover it.
00:24:28.082 [2024-11-20 18:09:46.566177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.082 [2024-11-20 18:09:46.566210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.082 [2024-11-20 18:09:46.566224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.082 [2024-11-20 18:09:46.566232] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.082 [2024-11-20 18:09:46.566237] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.082 [2024-11-20 18:09:46.576535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.082 qpair failed and we were unable to recover it.
00:24:28.082 [2024-11-20 18:09:46.586627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.082 [2024-11-20 18:09:46.586664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.082 [2024-11-20 18:09:46.586677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.082 [2024-11-20 18:09:46.586684] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.082 [2024-11-20 18:09:46.586692] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.082 [2024-11-20 18:09:46.596555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.082 qpair failed and we were unable to recover it.
00:24:28.082 [2024-11-20 18:09:46.606975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.082 [2024-11-20 18:09:46.607006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.082 [2024-11-20 18:09:46.607018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.082 [2024-11-20 18:09:46.607023] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.082 [2024-11-20 18:09:46.607028] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.616911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.343 [2024-11-20 18:09:46.626963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.343 [2024-11-20 18:09:46.627003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.343 [2024-11-20 18:09:46.627017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.343 [2024-11-20 18:09:46.627022] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.343 [2024-11-20 18:09:46.627026] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.636781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.343 [2024-11-20 18:09:46.646912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.343 [2024-11-20 18:09:46.646942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.343 [2024-11-20 18:09:46.646955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.343 [2024-11-20 18:09:46.646960] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.343 [2024-11-20 18:09:46.646964] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.656811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.343 [2024-11-20 18:09:46.667217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.343 [2024-11-20 18:09:46.667248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.343 [2024-11-20 18:09:46.667262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.343 [2024-11-20 18:09:46.667269] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.343 [2024-11-20 18:09:46.667273] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.677027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.343 [2024-11-20 18:09:46.687282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.343 [2024-11-20 18:09:46.687318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.343 [2024-11-20 18:09:46.687333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.343 [2024-11-20 18:09:46.687338] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.343 [2024-11-20 18:09:46.687342] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.697273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.343 [2024-11-20 18:09:46.707261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.343 [2024-11-20 18:09:46.707293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.343 [2024-11-20 18:09:46.707308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.343 [2024-11-20 18:09:46.707313] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.343 [2024-11-20 18:09:46.707318] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.717267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.343 [2024-11-20 18:09:46.726945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.343 [2024-11-20 18:09:46.726976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.343 [2024-11-20 18:09:46.726988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.343 [2024-11-20 18:09:46.726993] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.343 [2024-11-20 18:09:46.726997] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.736820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.343 [2024-11-20 18:09:46.747448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.343 [2024-11-20 18:09:46.747483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.343 [2024-11-20 18:09:46.747497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.343 [2024-11-20 18:09:46.747504] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.343 [2024-11-20 18:09:46.747509] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.757036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.343 [2024-11-20 18:09:46.767528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.343 [2024-11-20 18:09:46.767567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.343 [2024-11-20 18:09:46.767584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.343 [2024-11-20 18:09:46.767590] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.343 [2024-11-20 18:09:46.767594] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.343 [2024-11-20 18:09:46.777404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.343 qpair failed and we were unable to recover it.
00:24:28.344 [2024-11-20 18:09:46.787494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.344 [2024-11-20 18:09:46.787524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.344 [2024-11-20 18:09:46.787539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.344 [2024-11-20 18:09:46.787544] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.344 [2024-11-20 18:09:46.787549] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.344 [2024-11-20 18:09:46.797308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.344 qpair failed and we were unable to recover it.
00:24:28.344 [2024-11-20 18:09:46.807257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.344 [2024-11-20 18:09:46.807291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.344 [2024-11-20 18:09:46.807305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.344 [2024-11-20 18:09:46.807313] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.344 [2024-11-20 18:09:46.807318] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.344 [2024-11-20 18:09:46.817387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.344 qpair failed and we were unable to recover it.
00:24:28.344 [2024-11-20 18:09:46.827535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.344 [2024-11-20 18:09:46.827568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.344 [2024-11-20 18:09:46.827582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.344 [2024-11-20 18:09:46.827589] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.344 [2024-11-20 18:09:46.827594] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.344 [2024-11-20 18:09:46.837623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.344 qpair failed and we were unable to recover it.
00:24:28.344 [2024-11-20 18:09:46.847559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.344 [2024-11-20 18:09:46.847594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.344 [2024-11-20 18:09:46.847607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.344 [2024-11-20 18:09:46.847618] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.344 [2024-11-20 18:09:46.847622] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.344 [2024-11-20 18:09:46.857544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.344 qpair failed and we were unable to recover it.
00:24:28.344 [2024-11-20 18:09:46.867907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.344 [2024-11-20 18:09:46.867947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.344 [2024-11-20 18:09:46.867968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.344 [2024-11-20 18:09:46.867974] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.344 [2024-11-20 18:09:46.867979] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.606 [2024-11-20 18:09:46.877457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.606 qpair failed and we were unable to recover it.
00:24:28.606 [2024-11-20 18:09:46.887473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.606 [2024-11-20 18:09:46.887504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.606 [2024-11-20 18:09:46.887520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.606 [2024-11-20 18:09:46.887527] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.606 [2024-11-20 18:09:46.887535] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.606 [2024-11-20 18:09:46.897450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.606 qpair failed and we were unable to recover it.
00:24:28.606 [2024-11-20 18:09:46.907521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.606 [2024-11-20 18:09:46.907556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.606 [2024-11-20 18:09:46.907569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.606 [2024-11-20 18:09:46.907574] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.606 [2024-11-20 18:09:46.907578] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.606 [2024-11-20 18:09:46.917371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.606 qpair failed and we were unable to recover it.
00:24:28.606 [2024-11-20 18:09:46.928032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.606 [2024-11-20 18:09:46.928066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.606 [2024-11-20 18:09:46.928089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.606 [2024-11-20 18:09:46.928095] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.606 [2024-11-20 18:09:46.928100] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.606 [2024-11-20 18:09:46.937796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.606 qpair failed and we were unable to recover it.
00:24:28.606 [2024-11-20 18:09:46.948799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.606 [2024-11-20 18:09:46.948832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.606 [2024-11-20 18:09:46.948846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.606 [2024-11-20 18:09:46.948851] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.606 [2024-11-20 18:09:46.948856] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:28.606 [2024-11-20 18:09:46.957709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.606 qpair failed and we were unable to recover it.
00:24:28.606 [2024-11-20 18:09:46.967738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.606 [2024-11-20 18:09:46.967768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.606 [2024-11-20 18:09:46.967783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.606 [2024-11-20 18:09:46.967789] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.606 [2024-11-20 18:09:46.967793] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.606 [2024-11-20 18:09:46.977771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.606 qpair failed and we were unable to recover it. 00:24:28.606 [2024-11-20 18:09:46.988034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.606 [2024-11-20 18:09:46.988071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.606 [2024-11-20 18:09:46.988087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.606 [2024-11-20 18:09:46.988092] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.606 [2024-11-20 18:09:46.988097] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.607 [2024-11-20 18:09:46.998019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.607 qpair failed and we were unable to recover it. 00:24:28.607 [2024-11-20 18:09:47.008207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.607 [2024-11-20 18:09:47.008237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.607 [2024-11-20 18:09:47.008252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.607 [2024-11-20 18:09:47.008260] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.607 [2024-11-20 18:09:47.008264] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.607 [2024-11-20 18:09:47.017881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.607 qpair failed and we were unable to recover it. 
00:24:28.607 [2024-11-20 18:09:47.028268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.607 [2024-11-20 18:09:47.028304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.607 [2024-11-20 18:09:47.028319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.607 [2024-11-20 18:09:47.028324] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.607 [2024-11-20 18:09:47.028329] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.607 [2024-11-20 18:09:47.038057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.607 qpair failed and we were unable to recover it. 00:24:28.607 [2024-11-20 18:09:47.048004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.607 [2024-11-20 18:09:47.048034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.607 [2024-11-20 18:09:47.048048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.607 [2024-11-20 18:09:47.048056] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.607 [2024-11-20 18:09:47.048063] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.607 [2024-11-20 18:09:47.058047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.607 qpair failed and we were unable to recover it. 00:24:28.607 [2024-11-20 18:09:47.068347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.607 [2024-11-20 18:09:47.068381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.607 [2024-11-20 18:09:47.068392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.607 [2024-11-20 18:09:47.068399] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.607 [2024-11-20 18:09:47.068406] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.607 [2024-11-20 18:09:47.078210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.607 qpair failed and we were unable to recover it. 
00:24:28.607 [2024-11-20 18:09:47.088270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.607 [2024-11-20 18:09:47.088303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.607 [2024-11-20 18:09:47.088317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.607 [2024-11-20 18:09:47.088322] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.607 [2024-11-20 18:09:47.088326] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.607 [2024-11-20 18:09:47.098399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.607 qpair failed and we were unable to recover it. 00:24:28.607 [2024-11-20 18:09:47.108349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.607 [2024-11-20 18:09:47.108385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.607 [2024-11-20 18:09:47.108404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.607 [2024-11-20 18:09:47.108410] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.607 [2024-11-20 18:09:47.108414] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.607 [2024-11-20 18:09:47.118175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.607 qpair failed and we were unable to recover it. 00:24:28.607 [2024-11-20 18:09:47.128217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.607 [2024-11-20 18:09:47.128249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.607 [2024-11-20 18:09:47.128263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.607 [2024-11-20 18:09:47.128272] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.607 [2024-11-20 18:09:47.128277] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.607 [2024-11-20 18:09:47.138350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.607 qpair failed and we were unable to recover it. 
00:24:28.869 [2024-11-20 18:09:47.148476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.869 [2024-11-20 18:09:47.148509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.869 [2024-11-20 18:09:47.148523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.869 [2024-11-20 18:09:47.148528] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.869 [2024-11-20 18:09:47.148532] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.869 [2024-11-20 18:09:47.158217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.869 qpair failed and we were unable to recover it. 00:24:28.869 [2024-11-20 18:09:47.168421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.869 [2024-11-20 18:09:47.168456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.869 [2024-11-20 18:09:47.168468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.869 [2024-11-20 18:09:47.168473] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.869 [2024-11-20 18:09:47.168477] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.869 [2024-11-20 18:09:47.178327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.869 qpair failed and we were unable to recover it. 00:24:28.869 [2024-11-20 18:09:47.188653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.869 [2024-11-20 18:09:47.188680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.869 [2024-11-20 18:09:47.188694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.869 [2024-11-20 18:09:47.188705] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.869 [2024-11-20 18:09:47.188712] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.869 [2024-11-20 18:09:47.198423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.869 qpair failed and we were unable to recover it. 
00:24:28.869 [2024-11-20 18:09:47.208295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.869 [2024-11-20 18:09:47.208327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.869 [2024-11-20 18:09:47.208338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.869 [2024-11-20 18:09:47.208343] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.869 [2024-11-20 18:09:47.208347] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.869 [2024-11-20 18:09:47.218613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.869 qpair failed and we were unable to recover it. 00:24:28.869 [2024-11-20 18:09:47.229276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.869 [2024-11-20 18:09:47.229307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.869 [2024-11-20 18:09:47.229319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.869 [2024-11-20 18:09:47.229324] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.869 [2024-11-20 18:09:47.229329] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.869 [2024-11-20 18:09:47.238559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.869 qpair failed and we were unable to recover it. 00:24:28.869 [2024-11-20 18:09:47.248713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.869 [2024-11-20 18:09:47.248745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.869 [2024-11-20 18:09:47.248757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.870 [2024-11-20 18:09:47.248762] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.870 [2024-11-20 18:09:47.248767] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.870 [2024-11-20 18:09:47.258663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.870 qpair failed and we were unable to recover it. 
00:24:28.870 [2024-11-20 18:09:47.269468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.870 [2024-11-20 18:09:47.269502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.870 [2024-11-20 18:09:47.269514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.870 [2024-11-20 18:09:47.269519] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.870 [2024-11-20 18:09:47.269524] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.870 [2024-11-20 18:09:47.278570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.870 qpair failed and we were unable to recover it. 00:24:28.870 [2024-11-20 18:09:47.288622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.870 [2024-11-20 18:09:47.288652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.870 [2024-11-20 18:09:47.288665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.870 [2024-11-20 18:09:47.288670] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.870 [2024-11-20 18:09:47.288674] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.870 [2024-11-20 18:09:47.298929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.870 qpair failed and we were unable to recover it. 00:24:28.870 [2024-11-20 18:09:47.309540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.870 [2024-11-20 18:09:47.309573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.870 [2024-11-20 18:09:47.309584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.870 [2024-11-20 18:09:47.309589] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.870 [2024-11-20 18:09:47.309594] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.870 [2024-11-20 18:09:47.318753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.870 qpair failed and we were unable to recover it. 
00:24:28.870 [2024-11-20 18:09:47.329494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.870 [2024-11-20 18:09:47.329520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.870 [2024-11-20 18:09:47.329532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.870 [2024-11-20 18:09:47.329537] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.870 [2024-11-20 18:09:47.329542] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.870 [2024-11-20 18:09:47.338877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.870 qpair failed and we were unable to recover it. 00:24:28.870 [2024-11-20 18:09:47.349491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.870 [2024-11-20 18:09:47.349526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.870 [2024-11-20 18:09:47.349538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.870 [2024-11-20 18:09:47.349543] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.870 [2024-11-20 18:09:47.349547] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.870 [2024-11-20 18:09:47.358857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.870 qpair failed and we were unable to recover it. 00:24:28.870 [2024-11-20 18:09:47.368752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.870 [2024-11-20 18:09:47.368784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.870 [2024-11-20 18:09:47.368795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.870 [2024-11-20 18:09:47.368800] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.870 [2024-11-20 18:09:47.368805] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.870 [2024-11-20 18:09:47.378920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.870 qpair failed and we were unable to recover it. 
00:24:28.870 [2024-11-20 18:09:47.389165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.870 [2024-11-20 18:09:47.389197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.870 [2024-11-20 18:09:47.389209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.870 [2024-11-20 18:09:47.389214] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.870 [2024-11-20 18:09:47.389219] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:28.870 [2024-11-20 18:09:47.398838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.870 qpair failed and we were unable to recover it. 00:24:29.132 [2024-11-20 18:09:47.409234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.132 [2024-11-20 18:09:47.409272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.132 [2024-11-20 18:09:47.409283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.132 [2024-11-20 18:09:47.409288] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.132 [2024-11-20 18:09:47.409293] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.132 [2024-11-20 18:09:47.419044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.132 qpair failed and we were unable to recover it. 00:24:29.132 [2024-11-20 18:09:47.430051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.132 [2024-11-20 18:09:47.430084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.132 [2024-11-20 18:09:47.430105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.132 [2024-11-20 18:09:47.430111] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.132 [2024-11-20 18:09:47.430116] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.132 [2024-11-20 18:09:47.439196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.132 qpair failed and we were unable to recover it. 
00:24:29.132 [2024-11-20 18:09:47.449001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.132 [2024-11-20 18:09:47.449031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.132 [2024-11-20 18:09:47.449049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.132 [2024-11-20 18:09:47.449055] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.132 [2024-11-20 18:09:47.449059] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.132 [2024-11-20 18:09:47.459102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.132 qpair failed and we were unable to recover it. 00:24:29.132 [2024-11-20 18:09:47.469135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.132 [2024-11-20 18:09:47.469166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.132 [2024-11-20 18:09:47.469179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.132 [2024-11-20 18:09:47.469185] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.132 [2024-11-20 18:09:47.469189] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.132 [2024-11-20 18:09:47.479152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.132 qpair failed and we were unable to recover it. 00:24:29.132 [2024-11-20 18:09:47.489391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.132 [2024-11-20 18:09:47.489424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.132 [2024-11-20 18:09:47.489436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.132 [2024-11-20 18:09:47.489441] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.132 [2024-11-20 18:09:47.489445] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.132 [2024-11-20 18:09:47.499321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.132 qpair failed and we were unable to recover it. 
00:24:29.132 [2024-11-20 18:09:47.509487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.132 [2024-11-20 18:09:47.509514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.132 [2024-11-20 18:09:47.509525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.132 [2024-11-20 18:09:47.509530] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.132 [2024-11-20 18:09:47.509535] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.132 [2024-11-20 18:09:47.519265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.132 qpair failed and we were unable to recover it. 00:24:29.132 [2024-11-20 18:09:47.529220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.132 [2024-11-20 18:09:47.529250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.132 [2024-11-20 18:09:47.529262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.132 [2024-11-20 18:09:47.529267] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.132 [2024-11-20 18:09:47.529275] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.132 [2024-11-20 18:09:47.539438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.132 qpair failed and we were unable to recover it. 00:24:29.132 [2024-11-20 18:09:47.550328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.132 [2024-11-20 18:09:47.550366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.132 [2024-11-20 18:09:47.550386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.132 [2024-11-20 18:09:47.550392] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.132 [2024-11-20 18:09:47.550397] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.133 [2024-11-20 18:09:47.559367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.133 qpair failed and we were unable to recover it. 
00:24:29.133 [2024-11-20 18:09:47.569481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.133 [2024-11-20 18:09:47.569512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.133 [2024-11-20 18:09:47.569526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.133 [2024-11-20 18:09:47.569531] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.133 [2024-11-20 18:09:47.569536] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.133 [2024-11-20 18:09:47.579503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.133 qpair failed and we were unable to recover it. 00:24:29.133 [2024-11-20 18:09:47.589847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.133 [2024-11-20 18:09:47.589885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.133 [2024-11-20 18:09:47.589907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.133 [2024-11-20 18:09:47.589913] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.133 [2024-11-20 18:09:47.589918] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.133 [2024-11-20 18:09:47.599571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.133 qpair failed and we were unable to recover it. 00:24:29.133 [2024-11-20 18:09:47.609321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.133 [2024-11-20 18:09:47.609350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.133 [2024-11-20 18:09:47.609364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.133 [2024-11-20 18:09:47.609370] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.133 [2024-11-20 18:09:47.609374] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.133 [2024-11-20 18:09:47.619577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.133 qpair failed and we were unable to recover it. 
00:24:29.133 [2024-11-20 18:09:47.630032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.133 [2024-11-20 18:09:47.630067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.133 [2024-11-20 18:09:47.630088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.133 [2024-11-20 18:09:47.630094] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.133 [2024-11-20 18:09:47.630098] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.133 [2024-11-20 18:09:47.639819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.133 qpair failed and we were unable to recover it. 00:24:29.133 [2024-11-20 18:09:47.649971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.133 [2024-11-20 18:09:47.650006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.133 [2024-11-20 18:09:47.650019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.133 [2024-11-20 18:09:47.650025] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.133 [2024-11-20 18:09:47.650030] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.133 [2024-11-20 18:09:47.659747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.133 qpair failed and we were unable to recover it. 00:24:29.396 [2024-11-20 18:09:47.670380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.670416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.670428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.670433] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.670438] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.679904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 
00:24:29.396 [2024-11-20 18:09:47.690370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.690401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.690413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.690418] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.690423] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.699935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 00:24:29.396 [2024-11-20 18:09:47.710856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.710890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.710902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.710907] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.710912] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.719964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 00:24:29.396 [2024-11-20 18:09:47.730077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.730106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.730118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.730123] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.730128] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.739973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 
00:24:29.396 [2024-11-20 18:09:47.750901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.750938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.750949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.750954] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.750959] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.760073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 00:24:29.396 [2024-11-20 18:09:47.769951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.769982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.769993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.769998] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.770002] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.780111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 00:24:29.396 [2024-11-20 18:09:47.790358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.790387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.790401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.790406] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.790411] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.799681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 
00:24:29.396 [2024-11-20 18:09:47.809840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.809879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.809900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.809906] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.809911] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.820408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 00:24:29.396 [2024-11-20 18:09:47.831133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.831165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.831178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.831183] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.831188] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.840316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 00:24:29.396 [2024-11-20 18:09:47.850747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.396 [2024-11-20 18:09:47.850781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.396 [2024-11-20 18:09:47.850794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.396 [2024-11-20 18:09:47.850799] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.396 [2024-11-20 18:09:47.850803] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.396 [2024-11-20 18:09:47.860380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.396 qpair failed and we were unable to recover it. 
00:24:29.396 [2024-11-20 18:09:47.871274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.397 [2024-11-20 18:09:47.871305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.397 [2024-11-20 18:09:47.871318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.397 [2024-11-20 18:09:47.871322] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.397 [2024-11-20 18:09:47.871330] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.397 [2024-11-20 18:09:47.880112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.397 qpair failed and we were unable to recover it.
00:24:29.397 [2024-11-20 18:09:47.891306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.397 [2024-11-20 18:09:47.891335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.397 [2024-11-20 18:09:47.891347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.397 [2024-11-20 18:09:47.891352] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.397 [2024-11-20 18:09:47.891356] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.397 [2024-11-20 18:09:47.900309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.397 qpair failed and we were unable to recover it.
00:24:29.397 [2024-11-20 18:09:47.910656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.397 [2024-11-20 18:09:47.910688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.397 [2024-11-20 18:09:47.910709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.397 [2024-11-20 18:09:47.910715] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.397 [2024-11-20 18:09:47.910720] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.397 [2024-11-20 18:09:47.920311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.397 qpair failed and we were unable to recover it.
00:24:29.659 [2024-11-20 18:09:47.930284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.659 [2024-11-20 18:09:47.930316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.659 [2024-11-20 18:09:47.930330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.659 [2024-11-20 18:09:47.930335] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.659 [2024-11-20 18:09:47.930340] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.659 [2024-11-20 18:09:47.940579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.659 qpair failed and we were unable to recover it.
00:24:29.659 [2024-11-20 18:09:47.951440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.659 [2024-11-20 18:09:47.951478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.659 [2024-11-20 18:09:47.951492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.659 [2024-11-20 18:09:47.951497] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.659 [2024-11-20 18:09:47.951502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.659 [2024-11-20 18:09:47.960575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.659 qpair failed and we were unable to recover it.
00:24:29.659 [2024-11-20 18:09:47.971447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.659 [2024-11-20 18:09:47.971480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.659 [2024-11-20 18:09:47.971493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.659 [2024-11-20 18:09:47.971498] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.659 [2024-11-20 18:09:47.971502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.659 [2024-11-20 18:09:47.980465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.659 qpair failed and we were unable to recover it.
00:24:29.659 [2024-11-20 18:09:47.991578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.659 [2024-11-20 18:09:47.991611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:47.991623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:47.991628] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:47.991633] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.000562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.010674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.010708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.010720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.010725] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.010729] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.020791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.031467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.031500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.031511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.031516] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.031520] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.040851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.050828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.050860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.050876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.050881] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.050886] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.060915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.071033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.071067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.071078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.071084] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.071088] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.080894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.091587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.091619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.091631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.091636] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.091641] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.100970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.111139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.111170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.111183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.111188] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.111193] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.120989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.130567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.130603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.130619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.130623] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.130628] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.140901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.151817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.151850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.151868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.151873] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.151878] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.161140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.171548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.171580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.171591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.171596] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.171601] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.660 [2024-11-20 18:09:48.181127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.660 qpair failed and we were unable to recover it.
00:24:29.660 [2024-11-20 18:09:48.192088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.660 [2024-11-20 18:09:48.192123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.660 [2024-11-20 18:09:48.192135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.660 [2024-11-20 18:09:48.192140] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.660 [2024-11-20 18:09:48.192144] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.922 [2024-11-20 18:09:48.201080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.922 qpair failed and we were unable to recover it.
00:24:29.922 [2024-11-20 18:09:48.211481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.922 [2024-11-20 18:09:48.211511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.922 [2024-11-20 18:09:48.211524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.922 [2024-11-20 18:09:48.211528] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.922 [2024-11-20 18:09:48.211536] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.922 [2024-11-20 18:09:48.221300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.922 qpair failed and we were unable to recover it.
00:24:29.922 [2024-11-20 18:09:48.232226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.922 [2024-11-20 18:09:48.232256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.922 [2024-11-20 18:09:48.232267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.922 [2024-11-20 18:09:48.232272] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.922 [2024-11-20 18:09:48.232277] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.922 [2024-11-20 18:09:48.241336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.922 qpair failed and we were unable to recover it.
00:24:29.922 [2024-11-20 18:09:48.252001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.922 [2024-11-20 18:09:48.252033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.922 [2024-11-20 18:09:48.252045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.922 [2024-11-20 18:09:48.252050] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.922 [2024-11-20 18:09:48.252055] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.922 [2024-11-20 18:09:48.261280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.922 qpair failed and we were unable to recover it.
00:24:29.922 [2024-11-20 18:09:48.272169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.922 [2024-11-20 18:09:48.272204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.922 [2024-11-20 18:09:48.272216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.922 [2024-11-20 18:09:48.272221] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.922 [2024-11-20 18:09:48.272225] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.922 [2024-11-20 18:09:48.281513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.922 qpair failed and we were unable to recover it.
00:24:29.922 [2024-11-20 18:09:48.292262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.922 [2024-11-20 18:09:48.292293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.922 [2024-11-20 18:09:48.292305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.922 [2024-11-20 18:09:48.292309] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.922 [2024-11-20 18:09:48.292314] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.922 [2024-11-20 18:09:48.301339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.922 qpair failed and we were unable to recover it.
00:24:29.922 [2024-11-20 18:09:48.311835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.922 [2024-11-20 18:09:48.311876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.922 [2024-11-20 18:09:48.311888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.922 [2024-11-20 18:09:48.311893] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.922 [2024-11-20 18:09:48.311897] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.922 [2024-11-20 18:09:48.321404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.922 qpair failed and we were unable to recover it.
00:24:29.922 [2024-11-20 18:09:48.331485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.922 [2024-11-20 18:09:48.331515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.922 [2024-11-20 18:09:48.331527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.922 [2024-11-20 18:09:48.331532] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.922 [2024-11-20 18:09:48.331536] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.922 [2024-11-20 18:09:48.341599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.922 qpair failed and we were unable to recover it.
00:24:29.922 [2024-11-20 18:09:48.351726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.922 [2024-11-20 18:09:48.351762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.922 [2024-11-20 18:09:48.351775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.922 [2024-11-20 18:09:48.351780] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.922 [2024-11-20 18:09:48.351784] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.923 [2024-11-20 18:09:48.361577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.923 qpair failed and we were unable to recover it.
00:24:29.923 [2024-11-20 18:09:48.371474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.923 [2024-11-20 18:09:48.371505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.923 [2024-11-20 18:09:48.371516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.923 [2024-11-20 18:09:48.371521] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.923 [2024-11-20 18:09:48.371526] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.923 [2024-11-20 18:09:48.381914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.923 qpair failed and we were unable to recover it.
00:24:29.923 [2024-11-20 18:09:48.392472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.923 [2024-11-20 18:09:48.392509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.923 [2024-11-20 18:09:48.392520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.923 [2024-11-20 18:09:48.392526] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.923 [2024-11-20 18:09:48.392530] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.923 [2024-11-20 18:09:48.402052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.923 qpair failed and we were unable to recover it.
00:24:29.923 [2024-11-20 18:09:48.412225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.923 [2024-11-20 18:09:48.412256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.923 [2024-11-20 18:09:48.412268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.923 [2024-11-20 18:09:48.412273] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.923 [2024-11-20 18:09:48.412277] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.923 [2024-11-20 18:09:48.421748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.923 qpair failed and we were unable to recover it.
00:24:29.923 [2024-11-20 18:09:48.432423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.923 [2024-11-20 18:09:48.432455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.923 [2024-11-20 18:09:48.432466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.923 [2024-11-20 18:09:48.432471] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.923 [2024-11-20 18:09:48.432475] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:29.923 [2024-11-20 18:09:48.442066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.923 qpair failed and we were unable to recover it.
00:24:29.923 [2024-11-20 18:09:48.452598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.923 [2024-11-20 18:09:48.452631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.923 [2024-11-20 18:09:48.452643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.923 [2024-11-20 18:09:48.452648] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.923 [2024-11-20 18:09:48.452653] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.184 [2024-11-20 18:09:48.461910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.184 qpair failed and we were unable to recover it.
00:24:30.184 [2024-11-20 18:09:48.472851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.184 [2024-11-20 18:09:48.472885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.184 [2024-11-20 18:09:48.472897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.184 [2024-11-20 18:09:48.472905] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.184 [2024-11-20 18:09:48.472909] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.184 [2024-11-20 18:09:48.482089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.184 qpair failed and we were unable to recover it.
00:24:30.184 [2024-11-20 18:09:48.491851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.184 [2024-11-20 18:09:48.491888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.184 [2024-11-20 18:09:48.491899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.184 [2024-11-20 18:09:48.491904] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.184 [2024-11-20 18:09:48.491909] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.184 [2024-11-20 18:09:48.501983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.512283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.512315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.512326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.512331] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.512336] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.522123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.532290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.532326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.532341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.532345] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.532350] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.542230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.552369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.552408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.552420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.552425] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.552429] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.562003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.572098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.572126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.572138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.572143] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.572147] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.582211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.592671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.592706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.592720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.592727] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.592732] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.602334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.612578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.612610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.612624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.612632] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.612636] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.622268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.632640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.632670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.632684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.632692] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.632697] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.642596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.652896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.652926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.652940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.652948] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.652954] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.662401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.672842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.672882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.672897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.672902] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.672907] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.682513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.692749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.692786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.692799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.692804] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.692808] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.185 [2024-11-20 18:09:48.702902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.185 qpair failed and we were unable to recover it.
00:24:30.185 [2024-11-20 18:09:48.712794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.185 [2024-11-20 18:09:48.712825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.185 [2024-11-20 18:09:48.712838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.185 [2024-11-20 18:09:48.712843] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.185 [2024-11-20 18:09:48.712847] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.447 [2024-11-20 18:09:48.722785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.447 qpair failed and we were unable to recover it.
00:24:30.447 [2024-11-20 18:09:48.732676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.447 [2024-11-20 18:09:48.732708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.447 [2024-11-20 18:09:48.732726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.447 [2024-11-20 18:09:48.732733] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.447 [2024-11-20 18:09:48.732738] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.447 [2024-11-20 18:09:48.742761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.447 qpair failed and we were unable to recover it.
00:24:30.447 [2024-11-20 18:09:48.752921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.447 [2024-11-20 18:09:48.752954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.447 [2024-11-20 18:09:48.752966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.447 [2024-11-20 18:09:48.752971] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.447 [2024-11-20 18:09:48.752976] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.447 [2024-11-20 18:09:48.762906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.447 qpair failed and we were unable to recover it.
00:24:30.447 [2024-11-20 18:09:48.773027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.447 [2024-11-20 18:09:48.773057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.447 [2024-11-20 18:09:48.773069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.447 [2024-11-20 18:09:48.773075] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.447 [2024-11-20 18:09:48.773079] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.447 [2024-11-20 18:09:48.782905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.447 qpair failed and we were unable to recover it.
00:24:30.447 [2024-11-20 18:09:48.793118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.447 [2024-11-20 18:09:48.793150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.447 [2024-11-20 18:09:48.793164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.447 [2024-11-20 18:09:48.793172] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.447 [2024-11-20 18:09:48.793177] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.447 [2024-11-20 18:09:48.802913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.812959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.812992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.813006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.813016] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.813020] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.448 [2024-11-20 18:09:48.822855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.833268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.833297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.833311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.833318] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.833326] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.448 [2024-11-20 18:09:48.843021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.853271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.853303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.853319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.853324] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.853328] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.448 [2024-11-20 18:09:48.863193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.873267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.873298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.873311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.873315] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.873320] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.448 [2024-11-20 18:09:48.883114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.893111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.893140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.893155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.893161] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.893167] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.448 [2024-11-20 18:09:48.903175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.913501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.913535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.913550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.913555] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.913559] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.448 [2024-11-20 18:09:48.923358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.934167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.934200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.934216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.934221] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.934226] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.448 [2024-11-20 18:09:48.943603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.953385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.953414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.953429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.953436] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.953441] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.448 [2024-11-20 18:09:48.963290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.448 qpair failed and we were unable to recover it.
00:24:30.448 [2024-11-20 18:09:48.973535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.448 [2024-11-20 18:09:48.973569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.448 [2024-11-20 18:09:48.973591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.448 [2024-11-20 18:09:48.973597] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.448 [2024-11-20 18:09:48.973601] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.709 [2024-11-20 18:09:48.983437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.709 qpair failed and we were unable to recover it.
00:24:30.709 [2024-11-20 18:09:48.993793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.709 [2024-11-20 18:09:48.993828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.709 [2024-11-20 18:09:48.993843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.709 [2024-11-20 18:09:48.993850] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.709 [2024-11-20 18:09:48.993855] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.709 [2024-11-20 18:09:49.003485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.709 qpair failed and we were unable to recover it.
00:24:30.709 [2024-11-20 18:09:49.013730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.709 [2024-11-20 18:09:49.013762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.709 [2024-11-20 18:09:49.013779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.709 [2024-11-20 18:09:49.013784] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.709 [2024-11-20 18:09:49.013789] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.709 [2024-11-20 18:09:49.023426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.709 qpair failed and we were unable to recover it.
00:24:30.709 [2024-11-20 18:09:49.033793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.709 [2024-11-20 18:09:49.033823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.709 [2024-11-20 18:09:49.033837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.709 [2024-11-20 18:09:49.033844] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.709 [2024-11-20 18:09:49.033850] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.709 [2024-11-20 18:09:49.043508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.709 qpair failed and we were unable to recover it.
00:24:30.709 [2024-11-20 18:09:49.053633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.709 [2024-11-20 18:09:49.053665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.709 [2024-11-20 18:09:49.053689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.709 [2024-11-20 18:09:49.053695] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.709 [2024-11-20 18:09:49.053700] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.709 [2024-11-20 18:09:49.063909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.709 qpair failed and we were unable to recover it.
00:24:30.709 [2024-11-20 18:09:49.074642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.709 [2024-11-20 18:09:49.074676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.709 [2024-11-20 18:09:49.074692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.709 [2024-11-20 18:09:49.074697] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.709 [2024-11-20 18:09:49.074702] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.709 [2024-11-20 18:09:49.083476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.709 qpair failed and we were unable to recover it.
00:24:30.709 [2024-11-20 18:09:49.093963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.709 [2024-11-20 18:09:49.093997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.709 [2024-11-20 18:09:49.094010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.709 [2024-11-20 18:09:49.094015] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.709 [2024-11-20 18:09:49.094019] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.709 [2024-11-20 18:09:49.103693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.709 qpair failed and we were unable to recover it.
00:24:30.709 [2024-11-20 18:09:49.114015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.709 [2024-11-20 18:09:49.114047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.709 [2024-11-20 18:09:49.114063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.709 [2024-11-20 18:09:49.114068] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.709 [2024-11-20 18:09:49.114073] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.710 [2024-11-20 18:09:49.123668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.710 qpair failed and we were unable to recover it.
00:24:30.710 [2024-11-20 18:09:49.133925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.710 [2024-11-20 18:09:49.133956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.710 [2024-11-20 18:09:49.133968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.710 [2024-11-20 18:09:49.133973] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.710 [2024-11-20 18:09:49.133978] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.710 [2024-11-20 18:09:49.143853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.710 qpair failed and we were unable to recover it.
00:24:30.710 [2024-11-20 18:09:49.154209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.710 [2024-11-20 18:09:49.154243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.710 [2024-11-20 18:09:49.154258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.710 [2024-11-20 18:09:49.154266] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.710 [2024-11-20 18:09:49.154271] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.710 [2024-11-20 18:09:49.163955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.710 qpair failed and we were unable to recover it.
00:24:30.710 [2024-11-20 18:09:49.174295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.710 [2024-11-20 18:09:49.174333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.710 [2024-11-20 18:09:49.174347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.710 [2024-11-20 18:09:49.174353] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.710 [2024-11-20 18:09:49.174358] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.710 [2024-11-20 18:09:49.183930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.710 qpair failed and we were unable to recover it.
00:24:30.710 [2024-11-20 18:09:49.194264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.710 [2024-11-20 18:09:49.194299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.710 [2024-11-20 18:09:49.194312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.710 [2024-11-20 18:09:49.194317] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.710 [2024-11-20 18:09:49.194321] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.710 [2024-11-20 18:09:49.204253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.710 qpair failed and we were unable to recover it.
00:24:30.710 [2024-11-20 18:09:49.213963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.710 [2024-11-20 18:09:49.213995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.710 [2024-11-20 18:09:49.214007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.710 [2024-11-20 18:09:49.214012] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.710 [2024-11-20 18:09:49.214017] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.710 [2024-11-20 18:09:49.224053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.710 qpair failed and we were unable to recover it.
00:24:30.710 [2024-11-20 18:09:49.234397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.710 [2024-11-20 18:09:49.234433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.710 [2024-11-20 18:09:49.234447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.710 [2024-11-20 18:09:49.234455] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.710 [2024-11-20 18:09:49.234459] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:24:30.970 [2024-11-20 18:09:49.244167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:30.970 qpair failed and we were unable to recover it.
00:24:30.970 [2024-11-20 18:09:49.254570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.970 [2024-11-20 18:09:49.254601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.970 [2024-11-20 18:09:49.254617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.970 [2024-11-20 18:09:49.254622] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.970 [2024-11-20 18:09:49.254627] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.970 [2024-11-20 18:09:49.264125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.970 qpair failed and we were unable to recover it. 00:24:30.970 [2024-11-20 18:09:49.274145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.970 [2024-11-20 18:09:49.274177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.970 [2024-11-20 18:09:49.274189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.970 [2024-11-20 18:09:49.274193] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.970 [2024-11-20 18:09:49.274200] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.970 [2024-11-20 18:09:49.284094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.970 qpair failed and we were unable to recover it. 00:24:30.970 [2024-11-20 18:09:49.294274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.970 [2024-11-20 18:09:49.294305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.970 [2024-11-20 18:09:49.294319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.970 [2024-11-20 18:09:49.294326] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.970 [2024-11-20 18:09:49.294334] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.970 [2024-11-20 18:09:49.304374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.970 qpair failed and we were unable to recover it. 
00:24:30.970 [2024-11-20 18:09:49.314747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.970 [2024-11-20 18:09:49.314777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.970 [2024-11-20 18:09:49.314789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.314794] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.314798] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.324370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 00:24:30.971 [2024-11-20 18:09:49.334720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.334757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.334769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.334774] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.334778] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.344621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 00:24:30.971 [2024-11-20 18:09:49.354859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.354892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.354905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.354912] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.354919] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.364481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 
00:24:30.971 [2024-11-20 18:09:49.374547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.374576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.374588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.374595] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.374601] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.384459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 00:24:30.971 [2024-11-20 18:09:49.394754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.394787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.394800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.394805] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.394809] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.404446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 00:24:30.971 [2024-11-20 18:09:49.414923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.414958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.414978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.414984] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.414988] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.424694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 
00:24:30.971 [2024-11-20 18:09:49.434957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.434989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.435000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.435005] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.435010] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.444737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 00:24:30.971 [2024-11-20 18:09:49.454715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.454746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.454760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.454767] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.454775] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.464668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 00:24:30.971 [2024-11-20 18:09:49.475225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.475257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.475271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.475278] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.475284] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:30.971 [2024-11-20 18:09:49.484795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.971 qpair failed and we were unable to recover it. 
00:24:30.971 [2024-11-20 18:09:49.495234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.971 [2024-11-20 18:09:49.495266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.971 [2024-11-20 18:09:49.495281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.971 [2024-11-20 18:09:49.495288] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.971 [2024-11-20 18:09:49.495298] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.232 [2024-11-20 18:09:49.504736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.232 qpair failed and we were unable to recover it. 00:24:31.232 [2024-11-20 18:09:49.515319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.232 [2024-11-20 18:09:49.515348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.232 [2024-11-20 18:09:49.515359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.232 [2024-11-20 18:09:49.515364] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.232 [2024-11-20 18:09:49.515369] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.232 [2024-11-20 18:09:49.524978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.232 qpair failed and we were unable to recover it. 00:24:31.232 [2024-11-20 18:09:49.535109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.232 [2024-11-20 18:09:49.535143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.232 [2024-11-20 18:09:49.535156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.232 [2024-11-20 18:09:49.535163] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.232 [2024-11-20 18:09:49.535169] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.232 [2024-11-20 18:09:49.545148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.232 qpair failed and we were unable to recover it. 
00:24:31.232 [2024-11-20 18:09:49.555411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.232 [2024-11-20 18:09:49.555448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.232 [2024-11-20 18:09:49.555462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.232 [2024-11-20 18:09:49.555470] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.555475] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.565378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 00:24:31.233 [2024-11-20 18:09:49.576135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.576167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.576180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.576185] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.576191] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.585237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 00:24:31.233 [2024-11-20 18:09:49.595005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.595036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.595048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.595053] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.595057] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.605329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 
00:24:31.233 [2024-11-20 18:09:49.615651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.615682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.615694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.615700] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.615704] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.625277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 00:24:31.233 [2024-11-20 18:09:49.636168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.636201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.636213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.636218] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.636222] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.645120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 00:24:31.233 [2024-11-20 18:09:49.655055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.655092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.655104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.655109] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.655113] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.665269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 
00:24:31.233 [2024-11-20 18:09:49.675662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.675693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.675705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.675710] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.675715] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.685351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 00:24:31.233 [2024-11-20 18:09:49.695842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.695877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.695889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.695894] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.695899] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.705487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 00:24:31.233 [2024-11-20 18:09:49.715775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.715808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.715820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.715825] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.715829] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.725183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 
00:24:31.233 [2024-11-20 18:09:49.736559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.736592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.736604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.736609] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.736613] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.745974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 00:24:31.233 [2024-11-20 18:09:49.756584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.233 [2024-11-20 18:09:49.756615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.233 [2024-11-20 18:09:49.756630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.233 [2024-11-20 18:09:49.756635] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.233 [2024-11-20 18:09:49.756639] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:31.233 [2024-11-20 18:09:49.765349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.233 qpair failed and we were unable to recover it. 
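The block repeated above is the same five-step failure cycle, once per reconnect attempt: the target no longer recognizes controller ID 0x1, so each Fabrics CONNECT on a new I/O qpair completes with sct 1, sc 130, and the host then tears the qpair down with transport error -6 (errno 6, ENXIO, the "No such device or address" reported through spdk_nvme_qpair_process_completions). A quick decoding sketch; the spec mapping below is an editorial interpretation, not something the test prints:

# sct 1  -> Status Code Type 1, Command Specific
# sc 130 -> 0x82, the NVMe-oF CONNECT response status "Connect Invalid
#           Parameters", consistent with the target-side message
#           "Unknown controller ID 0x1"
printf 'sc %d = 0x%02x\n' 130 130   # prints: sc 130 = 0x82
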
00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Read completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.618 Write completed with error (sct=0, sc=8) 00:24:32.618 starting I/O failed 00:24:32.619 Read completed with error (sct=0, sc=8) 00:24:32.619 starting I/O failed 00:24:32.619 Write completed with error (sct=0, sc=8) 00:24:32.619 starting I/O failed 00:24:32.619 Write completed with error (sct=0, sc=8) 00:24:32.619 starting I/O failed 00:24:32.619 [2024-11-20 18:09:50.771104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.619 [2024-11-20 18:09:50.778180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.619 [2024-11-20 18:09:50.778227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.619 [2024-11-20 18:09:50.778252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.619 [2024-11-20 18:09:50.778261] 
nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.619 [2024-11-20 18:09:50.778268] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.619 [2024-11-20 18:09:50.788479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.619 qpair failed and we were unable to recover it. 00:24:32.619 [2024-11-20 18:09:50.798751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.619 [2024-11-20 18:09:50.798791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.619 [2024-11-20 18:09:50.798807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.619 [2024-11-20 18:09:50.798820] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.619 [2024-11-20 18:09:50.798827] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.619 [2024-11-20 18:09:50.808311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.619 qpair failed and we were unable to recover it. 00:24:32.619 [2024-11-20 18:09:50.818669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.619 [2024-11-20 18:09:50.818704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.619 [2024-11-20 18:09:50.818722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.619 [2024-11-20 18:09:50.818728] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.619 [2024-11-20 18:09:50.818734] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:32.619 [2024-11-20 18:09:50.828919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:32.619 qpair failed and we were unable to recover it. 
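Note that the handle in "Failed to connect rqpair=..." changes across attempts (0x2000003d4c40 earlier, then 0x2000003d3000 and 0x2000003cf800 on qpair ids 3 and 2), which suggests each recovery attempt brings up a fresh RDMA qpair rather than reviving the failed one. A small sketch to survey the distinct failed handles from a saved copy of this console output (build.log is a hypothetical filename):

# List each distinct RDMA qpair handle that failed to connect.
grep -o 'rqpair=0x[0-9a-f]*' build.log | sort -u
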
00:24:32.619 [2024-11-20 18:09:50.839409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.619 [2024-11-20 18:09:50.839444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.619 [2024-11-20 18:09:50.839459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.619 [2024-11-20 18:09:50.839464] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.619 [2024-11-20 18:09:50.839469] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:32.619 [2024-11-20 18:09:50.848489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:32.619 qpair failed and we were unable to recover it. 00:24:32.619 [2024-11-20 18:09:50.848674] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:24:32.619 A controller has encountered a failure and is being reset. 00:24:32.619 [2024-11-20 18:09:50.848807] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:32.619 [2024-11-20 18:09:50.851331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:24:32.619 Controller properly reset. 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 
00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Read completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 Write completed with error (sct=0, sc=8) 00:24:33.559 starting I/O failed 00:24:33.559 [2024-11-20 18:09:51.867122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.559 Initializing NVMe Controllers 00:24:33.559 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.559 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.559 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:33.559 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:33.559 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:33.559 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:33.559 Initialization complete. Launching workers. 
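The burst of "completed with error (sct=0, sc=8)" entries is the expected fallout of the reset sequence above: once the failed Keep Alive triggers the controller reset, the outstanding I/Os are completed back with generic status 0x08, which the NVMe base spec defines as Command Aborted due to SQ Deletion, and the host then reattaches and relaunches its workers. A sketch to tally the aborted reads and writes from a saved copy of this log (build.log is a hypothetical filename):

# Count aborted I/Os by direction.
grep -oE '(Read|Write) completed with error \(sct=0, sc=8\)' build.log | sort | uniq -c
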
00:24:33.559 Starting thread on core 1 00:24:33.559 Starting thread on core 2 00:24:33.559 Starting thread on core 3 00:24:33.559 Starting thread on core 0 00:24:33.559 18:09:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:33.559 00:24:33.559 real 0m13.683s 00:24:33.559 user 0m29.032s 00:24:33.559 sys 0m2.422s 00:24:33.559 18:09:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.559 18:09:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.559 ************************************ 00:24:33.559 END TEST nvmf_target_disconnect_tc2 00:24:33.559 ************************************ 00:24:33.559 18:09:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:24:33.560 18:09:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:24:33.560 18:09:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:33.560 18:09:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.560 18:09:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:33.560 ************************************ 00:24:33.560 START TEST nvmf_target_disconnect_tc3 00:24:33.560 ************************************ 00:24:33.560 18:09:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:24:33.560 18:09:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3689407 00:24:33.560 18:09:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:24:33.560 18:09:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:24:35.518 18:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3687699 00:24:35.518 18:09:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write 
completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Read completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 Write completed with error (sct=0, sc=8) 00:24:36.899 starting I/O failed 00:24:36.899 [2024-11-20 18:09:55.211310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:36.899 [2024-11-20 18:09:55.213890] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:36.899 [2024-11-20 18:09:55.213907] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:36.899 [2024-11-20 18:09:55.213913] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:37.837 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3687699 Killed "${NVMF_APP[@]}" "$@" 00:24:37.837 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:24:37.837 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:37.837 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:37.837 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:37.837 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:37.837 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
nvmf/common.sh@509 -- # nvmfpid=3690085 00:24:37.837 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3690085 00:24:37.837 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:37.838 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3690085 ']' 00:24:37.838 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.838 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.838 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.838 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.838 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:37.838 [2024-11-20 18:09:56.080062] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:24:37.838 [2024-11-20 18:09:56.080115] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.838 [2024-11-20 18:09:56.170445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.838 [2024-11-20 18:09:56.200060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.838 [2024-11-20 18:09:56.200088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.838 [2024-11-20 18:09:56.200094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.838 [2024-11-20 18:09:56.200098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.838 [2024-11-20 18:09:56.200102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.838 [2024-11-20 18:09:56.201326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:37.838 [2024-11-20 18:09:56.201475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:37.838 [2024-11-20 18:09:56.201612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:37.838 [2024-11-20 18:09:56.201614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:37.838 [2024-11-20 18:09:56.218221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:37.838 qpair failed and we were unable to recover it. 
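The replacement target is started with -m 0xF0, so its reactors come up on cores 4 through 7 (0xF0 is binary 11110000), matching the four "Reactor started on core" notices above and leaving cores 0 through 3 to the reconnect host process, which runs with -c 0xF. A one-line mask check (bc assumed present on the build host):

# Expand a core mask to binary: bit N set means a reactor thread on core N.
echo "obase=2; $((0xF0))" | bc   # prints 11110000, i.e. cores 4-7
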
00:24:37.838 [2024-11-20 18:09:56.220936] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:37.838 [2024-11-20 18:09:56.220952] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:37.838 [2024-11-20 18:09:56.220957] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.408 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.668 Malloc0 00:24:38.668 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.668 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:38.668 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.668 18:09:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.668 [2024-11-20 18:09:56.998569] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x242f320/0x243ae40) succeed. 00:24:38.668 [2024-11-20 18:09:57.008886] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24309b0/0x24bae80) succeed. 
00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.668 [2024-11-20 18:09:57.138899] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.668 18:09:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3689407 00:24:38.928 [2024-11-20 18:09:57.225310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:38.928 qpair failed and we were unable to recover it. 
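The rpc_cmd calls above (target_disconnect.sh lines 19-26) assemble the configuration under test: a malloc bdev, an RDMA transport, a subsystem with the bdev as its namespace, and data plus discovery listeners on the failover address. Outside the harness the same sequence is a handful of scripts/rpc.py calls against the default /var/tmp/spdk.sock; a minimal sketch with names and addresses copied from the log:

    # Backing bdev: 64 MiB malloc disk with 512-byte blocks.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

    # RDMA transport with the shared-buffer count used by the test.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024

    # Subsystem, namespace, and data + discovery listeners on 192.168.100.9:4420.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420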
00:24:38.928 [2024-11-20 18:09:57.227602] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:38.928 [2024-11-20 18:09:57.227616] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:38.928 [2024-11-20 18:09:57.227625] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:39.867 [2024-11-20 18:09:58.231850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:39.868 qpair failed and we were unable to recover it. 00:24:39.868 [2024-11-20 18:09:58.234513] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:39.868 [2024-11-20 18:09:58.234525] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:39.868 [2024-11-20 18:09:58.234530] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:40.809 [2024-11-20 18:09:59.238936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.809 qpair failed and we were unable to recover it. 00:24:40.809 [2024-11-20 18:09:59.241160] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:40.809 [2024-11-20 18:09:59.241170] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:40.809 [2024-11-20 18:09:59.241175] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:41.750 [2024-11-20 18:10:00.245593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.750 qpair failed and we were unable to recover it. 00:24:41.750 [2024-11-20 18:10:00.247906] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:41.750 [2024-11-20 18:10:00.247919] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:41.750 [2024-11-20 18:10:00.247924] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:43.131 [2024-11-20 18:10:01.252328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.131 qpair failed and we were unable to recover it. 
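Every retry above fails identically: rdma_cm hands back RDMA_CM_EVENT_REJECTED where RDMA_CM_EVENT_ESTABLISHED was expected, i.e. the remote side refuses the connect request while the disconnect path is being exercised. When rejections like this are not intentional, raw RDMA-CM reachability can be ruled out independently of SPDK; a purely diagnostic sketch using rping from librdmacm-utils (not part of this test):

    # On the target host: RDMA-CM ping-pong server on the listener address.
    rping -s -a 192.168.100.9 -v

    # On the initiator host: three verified exchanges, then exit.
    rping -c -a 192.168.100.9 -C 3 -v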
00:24:43.131 [2024-11-20 18:10:01.254899] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:43.131 [2024-11-20 18:10:01.254911] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:43.131 [2024-11-20 18:10:01.254916] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:44.074 [2024-11-20 18:10:02.259160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.074 qpair failed and we were unable to recover it. 00:24:44.074 [2024-11-20 18:10:02.261773] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:44.074 [2024-11-20 18:10:02.261785] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:44.074 [2024-11-20 18:10:02.261789] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:45.013 [2024-11-20 18:10:03.266063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:45.013 qpair failed and we were unable to recover it. 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 
00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Write completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 Read completed with error (sct=0, sc=8) 00:24:45.953 starting I/O failed 00:24:45.953 [2024-11-20 18:10:04.271901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:45.953 [2024-11-20 18:10:04.274273] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:45.953 [2024-11-20 18:10:04.274290] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:45.953 [2024-11-20 18:10:04.274296] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:46.896 [2024-11-20 18:10:05.278684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:46.896 qpair failed and we were unable to recover it. 00:24:46.896 [2024-11-20 18:10:05.281012] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:46.896 [2024-11-20 18:10:05.281025] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:46.896 [2024-11-20 18:10:05.281029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:47.838 [2024-11-20 18:10:06.285359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:47.838 qpair failed and we were unable to recover it. 00:24:47.838 [2024-11-20 18:10:06.285546] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:24:47.838 A controller has encountered a failure and is being reset. 00:24:47.838 Resorting to new failover address 192.168.100.9 00:24:47.838 [2024-11-20 18:10:06.285656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:47.838 [2024-11-20 18:10:06.285723] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:47.838 [2024-11-20 18:10:06.288295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:47.838 Controller properly reset. 
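This is the failover the test exists to provoke: a Keep Alive submission fails, the controller is marked failed and reset, and the host abandons the dead path for the failover address 192.168.100.9 registered earlier, ending with 'Controller properly reset.'. The host here is SPDK's userspace stack; for comparison, a kernel initiator would make the same move with nvme-cli (illustration only, using the 'nvme connect -i 15' form this harness configures for mlx5):

    # Drop the failed path, then attach via the failover listener.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    nvme connect -i 15 -t rdma -a 192.168.100.9 -s 4420 -n nqn.2016-06.io.spdk:cnode1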
00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Write completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 Read completed with error (sct=0, sc=8) 00:24:49.224 starting I/O failed 00:24:49.224 [2024-11-20 18:10:07.331678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 
starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Read completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 Write completed with error (sct=0, sc=8) 00:24:50.164 starting I/O failed 00:24:50.164 [2024-11-20 18:10:08.361831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.164 Initializing NVMe Controllers 00:24:50.164 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.164 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.164 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:50.164 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:50.164 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:50.164 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:50.164 Initialization complete. Launching workers. 
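The Attaching/Associating lines, with one RDMA association per lcore 0-3, are the signature of SPDK's perf example generating I/O while the target is yanked away. The exact invocation is not visible in this excerpt, so the following reconstruction with spdk_nvme_perf's standard flags is an assumption of shape, not the test's literal command:

    # Hypothetical perf run matching the output above: 4 cores, RDMA transport,
    # attached to nqn.2016-06.io.spdk:cnode1 at 192.168.100.8:4420.
    build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'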
00:24:50.164 Starting thread on core 1 00:24:50.164 Starting thread on core 2 00:24:50.164 Starting thread on core 3 00:24:50.164 Starting thread on core 0 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:24:50.164 00:24:50.164 real 0m16.409s 00:24:50.164 user 0m59.165s 00:24:50.164 sys 0m3.713s 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:50.164 ************************************ 00:24:50.164 END TEST nvmf_target_disconnect_tc3 00:24:50.164 ************************************ 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:50.164 rmmod nvme_rdma 00:24:50.164 rmmod nvme_fabrics 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3690085 ']' 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3690085 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3690085 ']' 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3690085 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3690085 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3690085' 00:24:50.164 killing process with pid 3690085 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3690085 00:24:50.164 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3690085 00:24:50.425 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.425 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:50.425 00:24:50.425 real 0m40.453s 00:24:50.425 user 2m21.587s 00:24:50.425 sys 0m13.113s 00:24:50.425 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.425 18:10:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:50.425 ************************************ 00:24:50.425 END TEST nvmf_target_disconnect 00:24:50.425 ************************************ 00:24:50.425 18:10:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:50.425 00:24:50.425 real 5m51.464s 00:24:50.425 user 13m8.825s 00:24:50.425 sys 1m50.207s 00:24:50.425 18:10:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.425 18:10:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.425 ************************************ 00:24:50.425 END TEST nvmf_host 00:24:50.425 ************************************ 00:24:50.425 18:10:08 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:24:50.425 00:24:50.425 real 19m27.476s 00:24:50.425 user 45m48.062s 00:24:50.425 sys 6m3.532s 00:24:50.425 18:10:08 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.425 18:10:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:50.425 ************************************ 00:24:50.425 END TEST nvmf_rdma 00:24:50.425 ************************************ 00:24:50.425 18:10:08 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:50.425 18:10:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:50.425 18:10:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.425 18:10:08 -- common/autotest_common.sh@10 -- # set +x 00:24:50.425 ************************************ 00:24:50.425 START TEST spdkcli_nvmf_rdma 00:24:50.425 ************************************ 00:24:50.425 18:10:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:50.687 * Looking for test storage... 
00:24:50.687 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:50.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.687 --rc genhtml_branch_coverage=1 00:24:50.687 --rc genhtml_function_coverage=1 00:24:50.687 --rc genhtml_legend=1 00:24:50.687 --rc geninfo_all_blocks=1 00:24:50.687 --rc geninfo_unexecuted_blocks=1 00:24:50.687 00:24:50.687 ' 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:50.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:50.687 --rc genhtml_branch_coverage=1 00:24:50.687 --rc genhtml_function_coverage=1 00:24:50.687 --rc genhtml_legend=1 00:24:50.687 --rc geninfo_all_blocks=1 00:24:50.687 --rc geninfo_unexecuted_blocks=1 00:24:50.687 00:24:50.687 ' 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:50.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.687 --rc genhtml_branch_coverage=1 00:24:50.687 --rc genhtml_function_coverage=1 00:24:50.687 --rc genhtml_legend=1 00:24:50.687 --rc geninfo_all_blocks=1 00:24:50.687 --rc geninfo_unexecuted_blocks=1 00:24:50.687 00:24:50.687 ' 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:50.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.687 --rc genhtml_branch_coverage=1 00:24:50.687 --rc genhtml_function_coverage=1 00:24:50.687 --rc genhtml_legend=1 00:24:50.687 --rc geninfo_all_blocks=1 00:24:50.687 --rc geninfo_unexecuted_blocks=1 00:24:50.687 00:24:50.687 ' 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.687 18:10:09 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.688 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3692733 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3692733 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 3692733 ']' 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.688 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:50.688 [2024-11-20 18:10:09.217930] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:24:50.688 [2024-11-20 18:10:09.217993] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692733 ] 00:24:50.948 [2024-11-20 18:10:09.297091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:50.948 [2024-11-20 18:10:09.336193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.948 [2024-11-20 18:10:09.336195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.519 18:10:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.519 18:10:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
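At this point the harness has matched two ConnectX ports out of its PCI cache: vendor 0x15b3 with device ID 0x1015 (ConnectX-4 Lx). The same check can be made directly with lspci; a one-line sketch using the IDs from the log:

    # Numeric vendor:device listing of the Mellanox adapters the harness matched.
    lspci -nn -d 15b3:1015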
00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:59.665 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:59.665 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:59.665 Found net devices under 0000:98:00.0: mlx_0_0 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:59.665 Found net devices under 0000:98:00.1: mlx_0_1 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:59.665 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:59.666 
18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:59.666 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:59.666 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:24:59.666 altname enp152s0f0np0 00:24:59.666 altname ens817f0np0 00:24:59.666 inet 192.168.100.8/24 scope global mlx_0_0 00:24:59.666 valid_lft forever preferred_lft forever 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:59.666 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:59.666 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:24:59.666 altname enp152s0f1np1 00:24:59.666 altname ens817f1np1 00:24:59.666 inet 192.168.100.9/24 scope global mlx_0_1 00:24:59.666 valid_lft forever preferred_lft forever 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:59.666 18:10:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:59.666 192.168.100.9' 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:59.666 192.168.100.9' 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:59.666 192.168.100.9' 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:59.666 18:10:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:59.666 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:59.666 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:59.666 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:59.666 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:59.666 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:24:59.666 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:59.666 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:59.666 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:59.666 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:59.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:59.666 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:59.666 ' 00:25:02.969 [2024-11-20 18:10:20.803375] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1728260/0x1735b80) succeed. 00:25:02.969 [2024-11-20 18:10:20.817879] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1729940/0x17b5bc0) succeed. 
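The get_ip_address trace near the top of this block reduces to a three-stage pipeline over iproute2 output. A minimal sketch of that helper, reconstructed from the xtrace above rather than copied from nvmf/common.sh, assuming the interface (e.g. mlx_0_0) carries exactly one IPv4 address:

  # "ip -o" prints one record per line; field 4 is "ADDR/PREFIX",
  # so awk selects the field and cut strips the prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)   # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)  # 192.168.100.9 in this run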
00:25:03.912 [2024-11-20 18:10:22.232424] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:25:06.456 [2024-11-20 18:10:24.760227] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:25:09.002 [2024-11-20 18:10:26.963240] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:25:10.386 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:25:10.386 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:25:10.386 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:25:10.386 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:25:10.386 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:25:10.386 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:25:10.386 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:25:10.386 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:25:10.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:25:10.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:25:10.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:25:10.386 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:10.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:25:10.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:25:10.386 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:25:10.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:25:10.387 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:25:10.387 18:10:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:25:10.387 18:10:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:10.387 18:10:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:10.387 18:10:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:25:10.387 18:10:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:10.387 18:10:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:10.387 18:10:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match
00:25:10.387 18:10:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:25:10.648 18:10:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:25:10.916 18:10:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:25:10.916 18:10:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:25:10.917 18:10:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:10.917 18:10:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:10.917 18:10:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:25:10.917 18:10:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:10.917 18:10:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
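The check_match step traced above (spdkcli/common.sh@44 to @46) validates the configuration just built: it dumps the live spdkcli tree and compares it with a checked-in expectations file. A sketch of that pattern, assuming the match tool takes the .match path and compares it against the same path minus the suffix, and assuming the redirection into the .test file, which plain xtrace output does not show:

  # Sketch only; $rootdir stands in for the spdk checkout path seen above.
  check_match() {
      local testdir=$rootdir/test/spdkcli/match_files
      # Dump the current /nvmf configuration tree...
      "$rootdir/scripts/spdkcli.py" ll /nvmf > "$testdir/spdkcli_nvmf.test"
      # ...compare it with the stored expectations (spdkcli_nvmf.test.match)...
      "$rootdir/test/app/match/match" "$testdir/spdkcli_nvmf.test.match"
      # ...and remove the capture, as the rm -f in the trace does.
      rm -f "$testdir/spdkcli_nvmf.test"
  }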
00:25:10.917 18:10:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:25:10.917 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:25:10.917 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:25:10.917 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:25:10.917 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:25:10.917 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:25:10.917 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:25:10.917 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:25:10.917 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:25:10.917 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:25:10.917 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:25:10.917 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:25:10.917 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:25:10.917 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:25:10.917 '
00:25:16.282 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:25:16.282 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:25:16.282 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:25:16.282 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:25:16.282 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:25:16.282 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:25:16.282 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:25:16.282 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:25:16.282 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:25:16.282 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:25:16.282 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:25:16.282 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:25:16.282 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:25:16.282 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3692733
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 3692733 ']'
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 3692733
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3692733
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3692733'
00:25:16.282 killing process with pid 3692733
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 3692733
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 3692733
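The killprocess 3692733 trace above encodes a small safety check before the target is shut down: refuse an empty pid, confirm the process is alive, and check whether the command name is a sudo wrapper before signalling. A reconstruction from the xtrace; behavior in the branches not taken in this run is assumed:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                # @954: refuse an empty pid
      kill -0 "$pid"                           # @958: fails if pid is gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # @960
      fi
      if [ "$process_name" = sudo ]; then      # @964: here it is reactor_0
          : # handling for a sudo wrapper is assumed, not shown in this trace
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"               # @973 / @978: reap the target
  }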
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:25:16.282 rmmod nvme_rdma
00:25:16.282 rmmod nvme_fabrics
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:25:16.282
00:25:16.282 real 0m25.760s
00:25:16.282 user 0m56.712s
00:25:16.282 sys 0m6.760s
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:16.282 18:10:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:16.282 ************************************
00:25:16.282 END TEST spdkcli_nvmf_rdma
00:25:16.282 ************************************
00:25:16.282 18:10:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:25:16.282 18:10:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:25:16.282 18:10:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:25:16.282 18:10:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:25:16.282 18:10:34 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:25:16.282 18:10:34 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:25:16.282 18:10:34 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:25:16.282 18:10:34 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:16.282 18:10:34 -- common/autotest_common.sh@10 -- # set +x
00:25:16.282 18:10:34 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:25:16.282 18:10:34 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:25:16.282 18:10:34 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:25:16.282 18:10:34 -- common/autotest_common.sh@10 -- # set +x
00:25:24.416 INFO: APP EXITING
00:25:24.416 INFO: killing all VMs
00:25:24.416 INFO: killing vhost app
00:25:24.416 INFO: EXIT DONE
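The nvmfcleanup trace above disables errexit and wraps the module unload in a retry loop (the for i in {1..20} at nvmf/common.sh@125), since nvme-rdma can stay busy briefly while queues drain. A sketch of that loop; only the first, successful iteration appears in the trace, so the break and back-off are assumptions:

  set +e                                  # tolerate "module in use" failures
  for i in {1..20}; do
      # -v echoes the underlying rmmod calls, which is where the
      # "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines above come from
      modprobe -v -r nvme-rdma && break   # break on success is assumed
      sleep 1                             # back-off between retries is assumed
  done
  modprobe -v -r nvme-fabrics
  set -e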
00:25:27.718 Waiting for block devices as requested
00:25:27.718 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:25:27.718 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:25:27.718 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:25:27.718 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:25:27.979 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:25:27.979 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:25:27.979 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:25:28.239 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:25:28.239 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:25:28.500 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:25:28.500 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:25:28.500 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:25:28.761 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:25:28.761 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:25:28.761 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:25:28.761 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:25:29.021 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
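The "vfio-pci -> ioatdma" and "vfio-pci -> nvme" lines above are PCI functions being handed back from the userspace driver to their kernel drivers after the test. The log does not show the mechanism; one common way to perform such a rebind from a script, assuming the driver_override interface available in modern kernels, is the hypothetical helper below (not taken from the log):

  rebind() {
      local bdf=$1 driver=$2
      echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
      echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
      echo "$bdf" > /sys/bus/pci/drivers_probe
      echo "" > "/sys/bus/pci/devices/$bdf/driver_override"  # clear override
  }

  rebind 0000:65:00.0 nvme   # mirrors the "vfio-pci -> nvme" line above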
00:25:33.226 Cleaning
00:25:33.226 Removing: /var/run/dpdk/spdk0/config
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:25:33.226 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:25:33.226 Removing: /var/run/dpdk/spdk0/hugepage_info
00:25:33.226 Removing: /var/run/dpdk/spdk1/config
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:25:33.226 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:25:33.226 Removing: /var/run/dpdk/spdk1/hugepage_info
00:25:33.226 Removing: /var/run/dpdk/spdk1/mp_socket
00:25:33.226 Removing: /var/run/dpdk/spdk2/config
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:25:33.226 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:25:33.226 Removing: /var/run/dpdk/spdk2/hugepage_info
00:25:33.226 Removing: /var/run/dpdk/spdk3/config
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:25:33.226 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:25:33.226 Removing: /var/run/dpdk/spdk3/hugepage_info
00:25:33.226 Removing: /var/run/dpdk/spdk4/config
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:25:33.226 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:25:33.226 Removing: /var/run/dpdk/spdk4/hugepage_info
00:25:33.226 Removing: /dev/shm/bdevperf_trace.pid3372397
00:25:33.226 Removing: /dev/shm/bdev_svc_trace.1
00:25:33.226 Removing: /dev/shm/nvmf_trace.0
00:25:33.226 Removing: /dev/shm/spdk_tgt_trace.pid3320743
00:25:33.226 Removing: /var/run/dpdk/spdk0
00:25:33.226 Removing: /var/run/dpdk/spdk1
00:25:33.226 Removing: /var/run/dpdk/spdk2
00:25:33.226 Removing: /var/run/dpdk/spdk3
00:25:33.226 Removing: /var/run/dpdk/spdk4
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3319117
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3320743
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3321453
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3322496
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3322841
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3323915
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3324235
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3324516
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3329802
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3330583
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3330974
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3331339
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3331701
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3332011
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3332231
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3332581
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3332967
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3334043
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3337434
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3337782
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3338146
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3338370
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3338768
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3339081
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3339480
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3339547
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3339844
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3339978
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3340215
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3340438
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3340998
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3341264
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3341561
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3346642
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3352155
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3365290
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3366230
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3372397
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3372747
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3378148
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3385360
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3388646
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3401231
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3433315
00:25:33.226 Removing: /var/run/dpdk/spdk_pid3438207
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3493644
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3500333
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3507533
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3519015
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3571869
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3572887
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3573982
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3575385
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3581295
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3590601
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3599754
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3600786
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3601820
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3602958
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3603446
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3609081
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3609148
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3614231
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3614903
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3615570
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3616485
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3616577
00:25:33.227 Removing: /var/run/dpdk/spdk_pid3622546
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3623237
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3629215
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3632562
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3639383
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3651681
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3651684
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3678687
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3678939
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3686308
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3686950
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3689407
00:25:33.487 Removing: /var/run/dpdk/spdk_pid3692733
00:25:33.487 Clean
00:25:33.487 18:10:51 -- common/autotest_common.sh@1453 -- # return 0
00:25:33.487 18:10:51 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:25:33.487 18:10:51 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:33.487 18:10:51 -- common/autotest_common.sh@10 -- # set +x
00:25:33.487 18:10:51 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:25:33.487 18:10:51 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:33.487 18:10:51 -- common/autotest_common.sh@10 -- # set +x
00:25:33.487 18:10:51 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:25:33.487 18:10:51 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:25:33.487 18:10:51 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:25:33.487 18:10:51 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:25:33.487 18:10:51 -- spdk/autotest.sh@398 -- # hostname
00:25:33.487 18:10:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:25:33.749 geninfo: WARNING: invalid characters removed from testname!
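The lcov call just above is the capture step of the coverage pass: it walks the coverage data under the checkout and writes cov_test.info, tagged with the builder's host name (spdk-cyp-12). The geninfo warning is benign; lcov strips characters it does not accept in test names. Reduced to its moving parts, with the long run of --rc options elided and $rootdir / $output_dir standing in for the paths shown in the trace:

  # -c: capture; --no-external: ignore files outside -d; -t: test name tag
  lcov -q -c --no-external -d "$rootdir" -t "$(hostname)" \
      -o "$output_dir/cov_test.info"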
00:26:00.336 18:11:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:26:00.336 18:11:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:26:02.244 18:11:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:26:04.151 18:11:22 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:26:06.061 18:11:24 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:26:07.441 18:11:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:26:09.365 18:11:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:26:09.365 18:11:27 -- spdk/autorun.sh@1 -- $ timing_finish
00:26:09.365 18:11:27 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]]
00:26:09.365 18:11:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:26:09.365 18:11:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:26:09.365 18:11:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:26:09.365 + [[ -n 3234643 ]]
00:26:09.365 + sudo kill 3234643
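The run of lcov calls above merges the baseline taken before the tests with the capture taken after them, then strips coverage attributed to code that is not under test. In condensed form (the --rc options repeated on every call are elided, and the file names are relative to the output directory):

  # Merge the pre-test baseline and the post-test capture into one tracefile
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # Drop bundled DPDK, system headers, and example/tool sources
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
  lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
  lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
  lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
  lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info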
00:26:09.375 [Pipeline] }
00:26:09.388 [Pipeline] // stage
00:26:09.393 [Pipeline] }
00:26:09.406 [Pipeline] // timeout
00:26:09.411 [Pipeline] }
00:26:09.424 [Pipeline] // catchError
00:26:09.428 [Pipeline] }
00:26:09.443 [Pipeline] // wrap
00:26:09.448 [Pipeline] }
00:26:09.460 [Pipeline] // catchError
00:26:09.469 [Pipeline] stage
00:26:09.471 [Pipeline] { (Epilogue)
00:26:09.483 [Pipeline] catchError
00:26:09.484 [Pipeline] {
00:26:09.496 [Pipeline] echo
00:26:09.498 Cleanup processes
00:26:09.503 [Pipeline] sh
00:26:09.791 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:26:09.792 3711518 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:26:09.805 [Pipeline] sh
00:26:10.091 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:26:10.092 ++ grep -v 'sudo pgrep'
00:26:10.092 ++ awk '{print $1}'
00:26:10.092 + sudo kill -9
00:26:10.092 + true
00:26:10.103 [Pipeline] sh
00:26:10.389 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:26:20.391 [Pipeline] sh
00:26:20.679 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:26:20.679 Artifacts sizes are good
00:26:20.694 [Pipeline] archiveArtifacts
00:26:20.701 Archiving artifacts
00:26:20.861 [Pipeline] sh
00:26:21.180 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest
00:26:21.192 [Pipeline] cleanWs
00:26:21.202 [WS-CLEANUP] Deleting project workspace...
00:26:21.202 [WS-CLEANUP] Deferred wipeout is used...
00:26:21.209 [WS-CLEANUP] done
00:26:21.211 [Pipeline] }
00:26:21.227 [Pipeline] // catchError
00:26:21.237 [Pipeline] sh
00:26:21.522 + logger -p user.info -t JENKINS-CI
00:26:21.533 [Pipeline] }
00:26:21.545 [Pipeline] // stage
00:26:21.550 [Pipeline] }
00:26:21.563 [Pipeline] // node
00:26:21.567 [Pipeline] End of Pipeline
00:26:21.601 Finished: SUCCESS